Joint Speed Control and Energy Replenishment Optimization for UAV-assisted IoT Data Collection with Deep Reinforcement Transfer Learning

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Internet of Things Journal, vol. PP, no. 99, pp. 1-1, 2022
Issue Date:
2022-01-01
Abstract:
Unmanned aerial vehicle (UAV)-assisted data collection has emerged as a prominent application due to its flexibility, mobility, and low operational cost. However, given the dynamics and uncertainty of the IoT data collection and energy replenishment processes, optimizing the performance of UAV collectors is very challenging. This paper therefore introduces a novel framework that jointly optimizes the flying speed and energy replenishment of each UAV to significantly improve overall system performance (e.g., data collection and energy usage efficiency). Specifically, we first formulate a Markov decision process (MDP) that enables each UAV to make optimal decisions automatically and dynamically under the dynamics and uncertainties of the environment. Although traditional reinforcement learning algorithms such as Q-learning and deep Q-learning can help the UAV obtain the optimal policy, they often converge slowly and incur high computational complexity, making them impractical to deploy on UAVs with limited computing capacity and energy resources. To overcome this, we develop advanced transfer learning techniques that allow UAVs to “share” and “transfer” learned knowledge, thereby reducing the learning time and significantly improving the learning quality. Extensive simulations demonstrate that our proposed solution improves the average data collection performance of the system by up to 200% and reduces the convergence time by up to 50% compared with conventional methods.
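The abstract does not detail the authors' exact transfer mechanism, so the following is only a minimal, hypothetical Python sketch of the general idea it describes: warm-starting one agent's tabular Q-learning from another agent's learned Q-values on a toy MDP, so the second agent converges in far fewer episodes. The state/action sizes, the transition table P, the reward table R, and the q_learning helper are all illustrative assumptions, not the paper's model.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy MDP standing in for a UAV's decision problem: states encode a coarse
    # (battery level, data-queue level) pair, actions are flying-speed /
    # recharge choices. Transitions and rewards are random placeholders,
    # NOT the paper's system model.
    N_STATES, N_ACTIONS = 12, 4
    P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))  # P[s, a] -> dist over s'
    R = rng.normal(size=(N_STATES, N_ACTIONS))                        # immediate rewards

    def q_learning(q, episodes=200, alpha=0.1, gamma=0.95, eps=0.1, horizon=50):
        """Standard tabular Q-learning; updates q in place and returns it."""
        for _ in range(episodes):
            s = rng.integers(N_STATES)
            for _ in range(horizon):
                # Epsilon-greedy action selection.
                a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(q[s]))
                s_next = rng.choice(N_STATES, p=P[s, a])
                td_target = R[s, a] + gamma * np.max(q[s_next])
                q[s, a] += alpha * (td_target - q[s, a])
                s = s_next
        return q

    # "Source" UAV learns its policy from scratch.
    q_source = q_learning(np.zeros((N_STATES, N_ACTIONS)))

    # "Target" UAV warm-starts from the transferred Q-table instead of zeros,
    # so it needs far fewer episodes to reach a comparable policy.
    q_target = q_learning(q_source.copy(), episodes=20)

In the paper's deep reinforcement learning setting, the transferred knowledge would presumably be the weights of a trained Q-network rather than a table, but the warm-start principle sketched here is the same.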