🤖 AI Summary
To address the single-point resource overload, high communication overhead, and elevated response latency caused by centralized decision-making in Mobile Edge Computing (MEC) task offloading, this paper proposes the Cooperative Task Offloading framework with Transformer-driven Prediction (CTO-TP). CTO-TP integrates asynchronous multi-agent deep reinforcement learning with Transformer-driven temporal task prediction to enable distributed, asynchronous coordination among edge servers and dynamic joint optimization of computational resources. Compared with the baseline schemes, CTO-TP achieves substantial improvements: up to 80% reduction in end-to-end latency, up to 87% decrease in energy consumption, and significant mitigation of edge-server load imbalance. The framework provides a scalable, distributed paradigm for low-latency, energy-efficient, and highly reliable computation offloading, particularly suited for 6G networks.
📝 Abstract
Future networks (including 6G) are poised to accelerate the realisation of the Internet of Everything. However, this will create high demand for computing resources to support new services. Mobile Edge Computing (MEC) is a promising solution, enabling computation-intensive tasks to be offloaded from end-user devices to nearby edge servers, thereby reducing latency and energy consumption. However, relying solely on a single MEC server for task offloading can lead to uneven resource utilisation and suboptimal performance in complex scenarios. Additionally, traditional task offloading strategies rely on centralised policy decisions, which inevitably incur high transmission latency and computational bottlenecks. To fill these gaps, we propose a latency- and energy-efficient Cooperative Task Offloading framework with Transformer-driven Prediction (CTO-TP), leveraging asynchronous multi-agent deep reinforcement learning to address these challenges. This approach fosters edge-edge cooperation and decreases synchronous waiting time by performing asynchronous training, optimising task offloading and resource allocation across distributed networks. The performance evaluation demonstrates that the proposed CTO-TP algorithm reduces overall system latency by up to 80% and energy consumption by up to 87% compared to the baseline schemes.
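The key structural idea the abstract describes — edge servers updating a shared policy asynchronously, without the synchronous waiting imposed by a central decision-maker — can be illustrated with a minimal sketch. Everything here is illustrative and assumed (the `SharedPolicy` class, the toy update rule, and the simulated loads are not from the paper, and the Transformer-based task predictor is omitted); it only shows the A3C-style asynchronous update pattern that such a framework builds on.

```python
import random
import threading

class SharedPolicy:
    """Globally shared parameters, updated asynchronously by every agent."""
    def __init__(self):
        self.weights = [0.0, 0.0]   # toy 2-parameter offloading policy
        self.updates = 0
        self.lock = threading.Lock()

    def apply(self, grad, lr=0.01):
        # Lock only the brief write: agents never wait on each other's rollouts.
        with self.lock:
            self.weights = [w - lr * g for w, g in zip(self.weights, grad)]
            self.updates += 1

def edge_agent(policy, steps, seed):
    """One edge server: observe local task load, push updates asynchronously."""
    rng = random.Random(seed)
    for _ in range(steps):
        load = rng.uniform(0.0, 1.0)       # simulated incoming task load
        w = policy.weights                 # lock-free (possibly stale) read
        offload_score = w[0] * load + w[1]
        err = offload_score - load         # toy regression-style error signal
        policy.apply([err * load, err])

policy = SharedPolicy()
threads = [threading.Thread(target=edge_agent, args=(policy, 50, s))
           for s in range(4)]              # four cooperating edge servers
for t in threads:
    t.start()
for t in threads:
    t.join()
print(policy.updates)  # 4 agents x 50 steps = 200 asynchronous updates
```

The point of the pattern is that each agent reads possibly stale parameters and commits its update independently, so no server idles while waiting for a central controller or for its peers to finish a round.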