Cooperative Task Offloading through Asynchronous Deep Reinforcement Learning in Mobile Edge Computing for Future Networks

📅 2025-04-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the single-point resource overload, high communication overhead, and elevated response latency caused by centralized decision-making in Mobile Edge Computing (MEC) task offloading, this paper proposes the Cooperative Task Offloading framework with Transformer-driven Prediction (CTO-TP). CTO-TP integrates asynchronous multi-agent deep reinforcement learning with Transformer-driven temporal task prediction to enable distributed, asynchronous coordination among edge servers and dynamic joint optimization of computational resources. Compared with the baseline schemes, CTO-TP reduces overall system latency by up to 80% and device energy consumption by up to 87%, while mitigating load imbalance across edge servers. The framework provides a scalable, distributed paradigm for low-latency, energy-efficient, and highly reliable computation offloading, particularly suited to 6G networks.

📝 Abstract
Future networks (including 6G) are poised to accelerate the realisation of the Internet of Everything. However, this will create a high demand for computing resources to support new services. Mobile Edge Computing (MEC) is a promising solution, enabling computation-intensive tasks to be offloaded from end-user devices to nearby edge servers, thereby reducing latency and energy consumption. However, relying solely on a single MEC server for task offloading can lead to uneven resource utilisation and suboptimal performance in complex scenarios. Additionally, traditional task offloading strategies rely on centralised policy decisions, which unavoidably entail high transmission latency and computational bottlenecks. To fill these gaps, we propose a latency- and energy-efficient Cooperative Task Offloading framework with Transformer-driven Prediction (CTO-TP), leveraging asynchronous multi-agent deep reinforcement learning to address these challenges. This approach fosters edge-edge cooperation and decreases synchronous waiting time by performing asynchronous training, optimising task offloading, and allocating resources across distributed networks. The performance evaluation demonstrates that the proposed CTO-TP algorithm reduces overall system latency by up to 80% and energy consumption by up to 87% compared to the baseline schemes.
Problem

Research questions and friction points this paper is trying to address.

Optimizes task offloading in Mobile Edge Computing for future networks
Addresses uneven resource utilization in single MEC server scenarios
Reduces latency and energy consumption via cooperative edge-edge frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asynchronous multi-agent deep reinforcement learning
Transformer-driven Prediction for task offloading
Edge-edge cooperation for resource optimization
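To make the multi-agent offloading idea above concrete, here is a minimal toy sketch: each edge server is an independent agent that learns, via tabular Q-learning, whether to serve a task locally or offload it to a neighbouring server, and agents step asynchronously (one at a time, in random order) rather than in lockstep. This is an illustrative simplification only: the paper's CTO-TP uses deep reinforcement learning with Transformer-based task prediction, and all names, reward shapes, and load dynamics below are assumptions made for the sketch.

```python
import random

class EdgeAgent:
    """Toy edge-server agent (hypothetical): tabular Q-learning over a
    discretised local-load state. Action 0 = process locally,
    action i > 0 = offload to the i-th neighbouring server."""
    def __init__(self, n_neighbors, alpha=0.1, gamma=0.9, eps=0.1):
        self.actions = list(range(n_neighbors + 1))
        self.q = {}  # state -> list of Q-values, one per action
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state, rng):
        qs = self.q.setdefault(state, [0.0] * len(self.actions))
        if rng.random() < self.eps:          # epsilon-greedy exploration
            return rng.choice(self.actions)
        return max(self.actions, key=lambda a: qs[a])

    def update(self, state, action, reward, next_state):
        qs = self.q.setdefault(state, [0.0] * len(self.actions))
        nxt = self.q.setdefault(next_state, [0.0] * len(self.actions))
        qs[action] += self.alpha * (reward + self.gamma * max(nxt) - qs[action])

def simulate(n_agents=3, steps=2000, seed=0):
    rng = random.Random(seed)
    agents = [EdgeAgent(n_agents - 1) for _ in range(n_agents)]
    loads = [0.0] * n_agents                 # per-server load in [0, 1]
    for _ in range(steps):
        i = rng.randrange(n_agents)          # asynchronous: one agent acts per step
        state = round(loads[i], 1)
        a = agents[i].act(state, rng)
        target = i if a == 0 else [j for j in range(n_agents) if j != i][a - 1]
        # latency grows with the target's load; offloading adds a small hop cost
        latency = 1.0 + loads[target] + (0.2 if target != i else 0.0)
        loads[target] = min(loads[target] + 0.1, 1.0)
        agents[i].update(state, a, -latency, round(loads[i], 1))
        loads = [max(l - 0.05, 0.0) for l in loads]  # servers drain over time
    return agents, loads

agents, loads = simulate()
print("final loads:", [round(l, 2) for l in loads])
```

Penalising latency pushes each agent toward lightly loaded targets, so load tends to spread across servers without any central coordinator, which is the qualitative effect the edge-edge cooperation aims for.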
Yuelin Liu
High Performance Networks (HPN) Research Group, Smart Internet Lab, University of Bristol, Bristol, UK
Haiyuan Li
High Performance Networks (HPN) Research Group, Smart Internet Lab, University of Bristol, Bristol, UK
Xenofon Vasilakos
High Performance Networks (HPN) Research Group, Smart Internet Lab, University of Bristol, Bristol, UK
Rasheed Hussain
Associate Professor in Intelligent Networks Security, Smart Internet Lab & BDFI, University of Bristol
Future Networks Security, AI Security, Responsible AI, Digital Twin Security, Blockchain
Dimitra Simeonidou
High Performance Networks (HPN) Research Group, Smart Internet Lab, University of Bristol, Bristol, UK