LSTM-Characterized Deep Reinforcement Learning for Continuous Flight Control and Resource Allocation in UAV-Assisted Sensor Network
- Publisher:
- IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
- Publication Type:
- Journal Article
- Citation:
- IEEE Internet of Things Journal, 2022, 9, (6), pp. 4179-4189
- Issue Date:
- 2022-03-15
Closed Access
Filename | Description | Size
---|---|---
LSTM-Characterized_Deep_Reinforcement_Learning_for_Continuous_Flight_Control_and_Resource_Allocation_in_UAV-Assisted_Sensor_Network.pdf | Published version | 2.65 MB
This item is closed access and not available.
Unmanned aerial vehicles (UAVs) can be employed to collect sensory data in remote wireless sensor networks (WSNs). Due to the UAV's maneuvering, scheduling one sensor device to transmit data can overflow the data buffers of the unscheduled ground devices. Moreover, lossy airborne channels can result in packet reception errors at the scheduled sensor. This article proposes a new deep reinforcement learning-based flight resource allocation framework (DeFRA) to minimize the overall data packet loss in a continuous action space. DeFRA is based on the deep deterministic policy gradient (DDPG); it optimally controls the instantaneous heading and speed of the UAV and selects the ground device for data collection. Furthermore, a state characterization layer, leveraging long short-term memory (LSTM), is developed to predict network dynamics resulting from time-varying airborne channels and energy arrivals at the ground devices. To validate the effectiveness of DeFRA, experimental data collected from a real-world UAV testbed and an energy harvesting WSN are used to train the actions of the UAV. Numerical results demonstrate that the proposed DeFRA achieves fast convergence while reducing packet loss by over 15% compared to existing deep reinforcement learning solutions.
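The architecture the abstract describes — an LSTM layer that characterizes the time-varying network state, feeding a DDPG-style deterministic actor that outputs the UAV's continuous heading and speed plus a ground-device selection — can be sketched as follows. This is an illustrative toy in plain NumPy, not the authors' implementation; all dimensions, weight initializations, and the exact action layout are assumptions made for the example.

```python
# Hypothetical sketch of DeFRA's actor path: LSTM state characterization
# followed by a deterministic policy head. Dimensions and names are assumed.
import numpy as np

rng = np.random.default_rng(0)

def lstm_cell(x, h, c, W, U, b):
    """One LSTM step: x is the raw observation, (h, c) the recurrent state."""
    z = W @ x + U @ h + b                       # stacked gate pre-activations
    i, f, o, g = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sig(i), sig(f), sig(o)
    c = f * c + i * np.tanh(g)                  # cell-state update
    h = o * np.tanh(c)                          # characterized state
    return h, c

def actor(h, W1, b1, W2, b2, n_devices):
    """Deterministic policy head: continuous heading/speed + device choice."""
    hidden = np.maximum(0.0, W1 @ h + b1)       # ReLU layer
    out = W2 @ hidden + b2
    heading = np.pi * np.tanh(out[0])           # heading in [-pi, pi]
    speed = 0.5 * (np.tanh(out[1]) + 1.0)       # normalized speed in [0, 1]
    device = int(np.argmax(out[2:2 + n_devices]))  # scheduled ground device
    return heading, speed, device

obs_dim, hid_dim, n_devices = 6, 8, 4
W = rng.standard_normal((4 * hid_dim, obs_dim)) * 0.1
U = rng.standard_normal((4 * hid_dim, hid_dim)) * 0.1
b = np.zeros(4 * hid_dim)
W1 = rng.standard_normal((16, hid_dim)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((2 + n_devices, 16)) * 0.1
b2 = np.zeros(2 + n_devices)

# Roll the LSTM over a short observation window; in the paper's setting these
# observations would carry channel conditions, energy arrivals, buffer levels.
h, c = np.zeros(hid_dim), np.zeros(hid_dim)
for _ in range(5):
    h, c = lstm_cell(rng.standard_normal(obs_dim), h, c, W, U, b)

heading, speed, device = actor(h, W1, b1, W2, b2, n_devices)
```

In a full DDPG loop, a critic network would score `(state, action)` pairs and the actor weights above would be updated by the deterministic policy gradient; only the forward path is shown here.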