Augmented Deep Reinforcement Learning for Online Energy Minimization of Wireless Powered Mobile Edge Computing

Publisher:
Institute of Electrical and Electronics Engineers (IEEE), Inc.
Publication Type:
Journal Article
Citation:
IEEE Transactions on Communications, vol. 71, no. 5, pp. 2698-2710, May 2023
Issue Date:
2023-05-01
Mobile edge computing (MEC) offers an opportunity for devices relying on wireless power transfer (WPT) to accomplish computationally demanding tasks. Such WPT-powered MEC systems have yet to be optimized for long-term efficiency, owing to the random, time-varying task demands and wireless channel states of the devices. This paper presents an augmented two-stage deep Q-network (DQN), referred to as 'TS-DQN,' for the online optimization of WPT-powered MEC systems, where the WPT, offloading schedule, channel allocation, and CPU configurations of the edge server and devices are jointly optimized to minimize the long-term average energy requirement of the system. The key idea is to design a DQN to learn the channel allocation and task admission, while the WPT, offloading time, and CPU configurations are efficiently optimized to precisely evaluate the reward of the DQN and substantially reduce its action space. A new action generation method is also developed to expand and diversify the actions of the DQN, further accelerating its convergence. As validated by simulations, the proposed TS-DQN is much more energy-efficient and converges much faster than an alternative that directly uses the state-of-the-art Deep Deterministic Policy Gradient (DDPG) algorithm to learn all decision variables.
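The two-stage structure described above can be sketched as follows: an outer DQN scores the discrete decisions (channel allocation and task admission), while an inner solver optimizes the continuous variables (WPT duration, offloading times, CPU frequencies) for each chosen action, and the resulting minimum energy serves as the DQN's reward. The Python sketch below is illustrative only, not the paper's implementation: the names (QNet, inner_optimize), the network sizes, the toy energy model, and the crude line search standing in for the inner convex solver are all assumptions made for this example.

import random
import numpy as np
import torch
import torch.nn as nn

N_DEVICES, N_CHANNELS = 4, 2
STATE_DIM = 2 * N_DEVICES                      # task sizes + channel gains
N_ACTIONS = (N_CHANNELS + 1) ** N_DEVICES      # per device: a channel, or "reject"

class QNet(nn.Module):
    """Stage 1: DQN scoring the discrete channel-allocation/task-admission actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS))

    def forward(self, s):
        return self.net(s)

def inner_optimize(state, action_idx):
    """Stage 2 (hypothetical stand-in): given the discrete action, the continuous
    variables (WPT duration, offloading times, CPU frequencies) would be optimized
    efficiently, e.g., in closed form or by convex optimization.  Here a toy line
    search over the WPT fraction stands in for that solver and returns the
    negative energy as the reward."""
    tasks, gains = state[:N_DEVICES], state[N_DEVICES:]
    alloc = [(action_idx // (N_CHANNELS + 1) ** i) % (N_CHANNELS + 1)
             for i in range(N_DEVICES)]        # decode per-device decision
    admitted = np.array([a > 0 for a in alloc], dtype=float)
    best = -np.inf
    for wpt in np.linspace(0.05, 0.95, 19):    # crude search over WPT fraction
        # toy energy model: WPT cost plus offloading cost of admitted tasks
        energy = wpt + np.sum(admitted * tasks / (gains * (1 - wpt) + 1e-6))
        best = max(best, -energy)
    return best

q = QNet()
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
eps, gamma = 0.1, 0.95
state = np.abs(np.random.randn(STATE_DIM)).astype(np.float32)
for step in range(200):
    s = torch.from_numpy(state)
    # epsilon-greedy over the (reduced) discrete action space
    a = random.randrange(N_ACTIONS) if random.random() < eps \
        else int(q(s).argmax())
    r = inner_optimize(state, a)               # reward via inner optimization
    next_state = np.abs(np.random.randn(STATE_DIM)).astype(np.float32)
    with torch.no_grad():
        target = r + gamma * q(torch.from_numpy(next_state)).max()
    loss = (q(s)[a] - target) ** 2             # one-step TD loss
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state

Note the design choice this illustrates: because the continuous variables are resolved inside the reward evaluation, the DQN only has to search the discrete action space, which is what the abstract credits for the reduced action space and faster convergence relative to learning all decision variables end-to-end with DDPG.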