🤖 AI Summary
To address the challenge of joint control-communication optimization in sixth-generation (6G) vehicular digital twin networks (VDTNs), this paper proposes a multi-timescale joint decision-making framework grounded in Value of Information (VoI). We introduce a novel dual-VoI metric that separately quantifies state-awareness value and control-utility value, thereby unifying control performance and communication resource requirements in a coherent representation. A dual deep reinforcement learning (DRL) architecture is designed to iteratively optimize decisions at both the macroscopic scheduling and microscopic transmission levels. Integrating digital twin modeling with VoI-based quantification, the framework is evaluated in platoon driving simulations. Results demonstrate significant improvements: end-to-end latency is reduced by 32%, and trajectory tracking error decreases by 41%. These outcomes validate the effectiveness and practicality of the proposed control-communication co-optimization approach for 6G VDTNs.
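The dual-VoI idea above can be illustrated with a minimal sketch. Note that the formulas, function names, and the trade-off weight `alpha` below are illustrative assumptions, not the paper's actual definitions: state-awareness value is modeled as the reduction in state-estimation error from transmitting a fresh update, and control-utility value as the resulting improvement in control cost.

```python
# Hypothetical sketch of a dual-VoI metric; all formulas and weights
# here are illustrative assumptions, not the paper's definitions.

def state_awareness_voi(err_without_update: float, err_with_update: float) -> float:
    """Value as the reduction in state-estimation error (e.g., position
    uncertainty of a vehicle's digital twin) gained by a fresh update."""
    return max(0.0, err_without_update - err_with_update)

def control_utility_voi(cost_without_update: float, cost_with_update: float) -> float:
    """Value as the improvement in control cost (e.g., platoon trajectory
    tracking error) enabled by the update."""
    return max(0.0, cost_without_update - cost_with_update)

def joint_voi(sa_voi: float, cu_voi: float, alpha: float = 0.5) -> float:
    """Combine both values into one scheduling priority; alpha is an
    assumed trade-off weight between awareness and control utility."""
    return alpha * sa_voi + (1.0 - alpha) * cu_voi

# Example: an update that halves estimation error and trims control cost.
sa = state_awareness_voi(2.0, 1.0)
cu = control_utility_voi(0.8, 0.5)
priority = joint_voi(sa, cu, alpha=0.6)
```

In a scheduler, updates with higher `priority` would be granted communication resources first, which is one simple way such a metric could bridge the control and communication decisions described above.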
📝 Abstract
The vision of sixth-generation (6G) wireless networks paves the way for the seamless integration of digital twins into vehicular networks, giving rise to the Vehicular Digital Twin Network (VDTN). The abundant computing resources and massive spatial-temporal data in the Digital Twin (DT) domain can be utilized to enhance the communication and control performance of Internet of Vehicles (IoV) systems. In this article, we first propose the architecture of VDTN, emphasizing key modules that center on functions related to the joint optimization of control and communication. We then delve into the intricacies of the multi-timescale decision process inherent in joint optimization in VDTN, specifically investigating the dynamic interplay between control and communication. To facilitate the joint optimization, we define two Value of Information (VoI) concepts rooted in control performance. Subsequently, utilizing VoI as a bridge between control and communication, we introduce a novel joint optimization framework, which involves iterative processing of two Deep Reinforcement Learning (DRL) modules corresponding to control and communication to derive the optimal policy. Finally, we conduct simulations of the proposed framework applied to a platoon scenario to demonstrate its effectiveness in ensu