🤖 AI Summary
To address the conflict between multi-task real-time computing demands and limited in-vehicle computational resources in vehicular networks, this paper proposes a vehicle-infrastructure cooperative edge computing framework integrating digital twin (DT) technology with multi-agent deep reinforcement learning (MADRL). The framework jointly optimizes task offloading decisions and computing resource allocation across vehicular edge computing (VEC) servers within a single time slot, overcoming limitations of conventional static modeling and centralized optimization. We innovatively design a multi-task DT system to enable dynamic, collaborative decision-making among vehicles, and develop a distributed MADRL algorithm for low-overhead, highly adaptive online optimization. Experimental results demonstrate significant improvements over baseline approaches: an 18.7% increase in task completion rate, a 23.4% reduction in average latency, and a 31.2% improvement in resource utilization.
📝 Abstract
With the increasing demand for multiple applications on internet of vehicles. It requires vehicles to carry out multiple computing tasks in real time. However, due to the insufficient computing capability of vehicles themselves, offloading tasks to vehicular edge computing (VEC) servers and allocating computing resources to tasks becomes a challenge. In this paper, a multi task digital twin (DT) VEC network is established. By using DT to develop offloading strategies and resource allocation strategies for multiple tasks of each vehicle in a single slot, an optimization problem is constructed. To solve it, we propose a multi-agent reinforcement learning method on the task offloading and resource allocation. Numerous experiments demonstrate that our method is effective compared to other benchmark algorithms.