Leveraging Digital Twin and DRL for Collaborative Context Offloading in C-V2X Autonomous Driving

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Transactions on Vehicular Technology, 2023, vol. PP, no. 99, pp. 1-16
Issue Date:
2023-01-01
Digital Twin (DT) technology can map vehicular contexts between the physical and virtual worlds in a collaborative autonomous driving (CAD) system. Built on C-V2X, 6G, Mobile Edge Computing (MEC), Machine Learning (ML), and other technologies, DT enables robust and reliable digital twin-based collaborative autonomous driving architectures, providing a platform for testing, validating, and refining autonomous driving systems efficiently and safely. However, future large-scale CAD systems require greater real-time processing and resource-collaboration capability for autonomous vehicles (AVs), and the mobility of AVs places even higher demands on their management. In this paper, we present a three-layer digital twin-based collaborative autonomous driving (DTCAD) architecture in C-V2X to provide better resource management of AVs. To improve Quality of Service (QoS) and reduce processing latency in large-scale CAD scenarios, a scalable method combining Deep Reinforcement Learning with Mean Field Games (DDPG-MFG) is proposed, in which the dynamic, real-time interaction between AVs is approximated as a mean-field game during DT resource allocation. In particular, to improve the interaction efficiency between AVs and the CAD environment, we design more efficient exploitation and exploration algorithms for the AVs. CARLA simulations demonstrate that the proposed algorithm significantly reduces task offloading latency and improves the average reward by 28.5%, 3.5%, and 6.8% compared with traditional DDPG, TD3, and AC, respectively.
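The abstract does not give the algorithmic details of DDPG-MFG, but the core idea it names, conditioning a DDPG update on a mean-field summary of the other AVs' actions rather than on each AV individually, can be sketched as follows. This is a minimal, hypothetical PyTorch sketch: the class names, network sizes, action semantics (an offloading decision in [-1, 1]), and the transition format carrying the population's mean action are all assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a mean-field DDPG update: the critic sees only the
# mean action a_bar of the surrounding AV population instead of every other
# AV's action. Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class MeanFieldCritic(nn.Module):
    """Q(s, a, a_bar): value of one AV's offloading action a, given the
    mean action a_bar of the AV population (the mean-field approximation)."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 2 * action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action, mean_action):
        return self.net(torch.cat([state, action, mean_action], dim=-1))

class Actor(nn.Module):
    """Deterministic offloading policy: maps local vehicular context to an
    action (e.g., an offloading ratio) squashed into [-1, 1]."""
    def __init__(self, state_dim, action_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),
        )

    def forward(self, state):
        return self.net(state)

def update(actor, critic, target_actor, target_critic, batch,
           actor_opt, critic_opt, gamma=0.99, tau=0.005):
    """One DDPG-style update step under the mean-field approximation.
    `batch` is assumed to carry (state, action, mean action, reward,
    next state, next mean action) tuples as stacked tensors."""
    s, a, a_bar, r, s2, a_bar2 = batch
    # Critic regression toward the bootstrapped mean-field Q target.
    with torch.no_grad():
        q_target = r + gamma * target_critic(s2, target_actor(s2), a_bar2)
    critic_loss = nn.functional.mse_loss(critic(s, a, a_bar), q_target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: ascend Q w.r.t. the actor's action.
    actor_loss = -critic(s, actor(s), a_bar).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak averaging of target networks, as in standard DDPG.
    for p, tp in zip(critic.parameters(), target_critic.parameters()):
        tp.data.mul_(1 - tau).add_(tau * p.data)
    for p, tp in zip(actor.parameters(), target_actor.parameters()):
        tp.data.mul_(1 - tau).add_(tau * p.data)
```

The design point the abstract emphasizes is scalability: because the critic conditions on a single mean action rather than on every neighboring AV, the update cost stays constant as the CAD scenario grows, which is what makes the mean-field formulation attractive for large AV populations.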