AI Summary
This study addresses the challenges of poor real-time performance, inefficient resource allocation, and difficult task offloading coordination in extended reality (XR) applications within the metaverse, which stem from asymmetric 2D/3D data transmission and computationally intensive rendering. To tackle these issues, the authors propose a digital twin-driven in-network computing (INC) and edge collaboration framework. For the first time, digital twin technology is integrated with INC and formulated as a Stackelberg Markov game. A Nash-asynchronous hybrid multi-agent reinforcement learning algorithm is designed to achieve joint uplink-downlink optimization and equilibrium solutions. Experimental results demonstrate that the proposed approach significantly improves system utility, uplink throughput, and energy efficiency, while effectively reducing end-to-end latency and enhancing the utilization of wireless and computational resources.
Abstract
Advancements in extended reality (XR) are driving the development of the metaverse, which demands efficient real-time transformation of 2D scenes into 3D objects, a computation-intensive process that necessitates task offloading because of complex perception, visual, and audio processing. This challenge is further compounded by asymmetric uplink (UL) and downlink (DL) data characteristics, where 2D data are transmitted in the UL and 3D content is rendered in the DL. To address these issues, we propose a digital twin (DT)-based in-network computing (INC)-assisted multi-access edge computing (MEC) framework that enables real-time synchronization and collaborative computing via ultra-reliable low-latency communication (URLLC). In this framework, a network operator manages wireless and computational resources for XR user devices (XUDs), while XUDs autonomously offload tasks to maximize their utilities. We model the interactions between XUDs and the operator as a Stackelberg Markov game, where the optimal offloading strategy constitutes an exact potential game with a Nash equilibrium (NE), and the operator's problem is formulated as an asynchronous Markov decision process (MDP). We further propose a decentralized solution in which XUDs determine offloading decisions based on the operator's joint UL-DL optimization of the offloading mode (INC-E or MEC only) and DL power allocation. A Nash-asynchronous hybrid multi-agent reinforcement learning (AMRL) algorithm is developed to predict the UL user association and DL transmission power, thereby achieving the NE. Simulation results demonstrate that the proposed approach considerably improves system utility, uplink rate, and energy efficiency by reducing latency and optimizing resource utilization in metaverse environments.
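The abstract's claim that the offloading game is an exact potential game with a Nash equilibrium can be illustrated with a toy congestion-style offloading game, where unilateral best-response updates are guaranteed to converge. This is a minimal sketch under illustrative assumptions: the server names, the load-based latency cost, and the best-response scheme are hypothetical stand-ins, not the paper's actual utility model or AMRL algorithm.

```python
# Toy exact potential game: each XR user device (XUD) picks an offload
# target, and its cost is the load (number of users) on the server it chose.
# Congestion games of this form admit an exact potential function, so
# best-response dynamics terminate at a Nash equilibrium (NE).
from collections import Counter

SERVERS = ["MEC", "INC-E"]  # hypothetical offload targets


def cost(choices, i):
    """Latency proxy for user i: load on the server it chose."""
    return Counter(choices)[choices[i]]


def best_response_dynamics(n_users, max_rounds=100):
    choices = ["MEC"] * n_users  # all users start on the MEC server
    for _ in range(max_rounds):
        changed = False
        for i in range(n_users):
            current = cost(choices, i)
            for s in SERVERS:
                trial = choices[:i] + [s] + choices[i + 1:]
                if cost(trial, i) < current:  # unilateral improvement
                    choices, current, changed = trial, cost(trial, i), True
        if not changed:  # no user can improve alone -> NE reached
            return choices
    return choices


eq = best_response_dynamics(4)
```

With 4 users and 2 servers, the dynamics balance the load to 2 users per server; no user can then lower its cost by switching, which is exactly the NE property the decentralized offloading solution in the paper relies on.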