Energy Efficient Task Offloading in UAV-Enabled MEC Using a Fully Decentralized Deep Reinforcement Learning Approach

📅 2025-08-09
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Joint optimization of task offloading, user association, and UAV trajectory in UAV-assisted multi-access edge computing (MEC) is challenged by non-convexity, unpredictable channel conditions and user positions due to mobility, and the poor scalability and robustness of semi-centralized architectures with high communication overhead. Method: We propose a fully decentralized deep reinforcement learning framework that integrates graph attention networks (GATs) to model local topological relationships and employs experience- and parameter-sharing proximal policy optimization (EPS-PPO) for distributed cooperative decision-making, relying solely on local observations and neighbor-to-neighbor communication. Contribution/Results: Experiments demonstrate significant improvements over baseline methods (e.g., MADDPG) in energy efficiency, task completion rate, and latency. The framework further achieves superior scalability and system robustness under dynamic network conditions and heterogeneous user mobility.

๐Ÿ“ Abstract
Unmanned aerial vehicles (UAVs) have recently been utilized in multi-access edge computing (MEC) as edge servers. It is desirable to design UAVs' trajectories and user-to-UAV assignments so as to ensure satisfactory service to the users and energy-efficient operation simultaneously. The posed optimization problem is challenging to solve because: (i) the formulated problem is non-convex; (ii) due to the mobility of ground users, their future positions and channel gains are not known in advance; (iii) local UAV observations must be communicated to a central entity that solves the optimization problem. This (semi-)centralized processing leads to communication overhead, communication/processing bottlenecks, lack of flexibility and scalability, and loss of robustness to system failures. To simultaneously address all these limitations, we advocate a fully decentralized setup with no centralized entity. Each UAV obtains its local observation and then communicates with its immediate neighbors only. After sharing information with neighbors, each UAV determines its next position via a locally run deep reinforcement learning (DRL) algorithm. None of the UAVs need to know the global communication graph. The two main components of our proposed solution are (i) graph attention layers (GAT), and (ii) experience- and parameter-sharing proximal policy optimization (EPS-PPO). Our proposed approach eliminates all the limitations of semi-centralized MADRL methods such as MAPPO and multi-agent deep deterministic policy gradient (MADDPG), while guaranteeing better performance than independent local DRL methods such as IPPO. Numerical results reveal notable performance gains over the existing MADDPG algorithm across several different criteria, demonstrating the potential for better performance while utilizing local communications only.
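To make the GAT component of the abstract concrete, below is a minimal numpy sketch of one graph-attention aggregation step, as a single UAV might apply it to its own embedding together with the embeddings received from its one-hop neighbors. This is an illustrative sketch of a standard single-head GAT layer, not the authors' implementation; all names, dimensions, and the LeakyReLU slope are assumptions.

```python
import numpy as np

def gat_aggregate(h_self, h_neighbors, W, a, slope=0.2):
    """One GAT head: attend over self + neighbors, return aggregated feature.

    h_self:      (d,)  local observation embedding of this UAV
    h_neighbors: (k,d) embeddings received from the k one-hop neighbors
    W:           (d,d_out) shared linear transform
    a:           (2*d_out,) attention vector
    """
    H = np.vstack([h_self[None, :], h_neighbors])   # (k+1, d)
    Z = H @ W                                        # (k+1, d_out)
    z_i = Z[0]
    # Attention logits e_ij = LeakyReLU(a^T [W h_i || W h_j])
    logits = np.array([np.concatenate([z_i, z_j]) @ a for z_j in Z])
    logits = np.where(logits > 0, logits, slope * logits)
    # Softmax over the local neighborhood only (no global graph needed)
    att = np.exp(logits - logits.max())
    att /= att.sum()
    return att, att @ Z  # attention weights and aggregated feature
```

Because the softmax runs only over the UAV's own neighborhood, the computation matches the paper's premise that no agent needs the global communication graph.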
Problem

Research questions and friction points this paper is trying to address.

Optimizing UAV trajectories for energy-efficient MEC task offloading
Decentralizing UAV control to reduce communication overhead and bottlenecks
Enhancing performance using local DRL without global network knowledge
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully decentralized deep reinforcement learning approach
Graph attention layers for local observations
Experience sharing proximal policy optimization
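The experience- and parameter-sharing idea behind EPS-PPO can be sketched as follows: every UAV runs a copy of one shared policy, and their trajectories are pooled into a single clipped-surrogate PPO loss for that shared parameter set. This is a hypothetical numpy illustration of the standard PPO clipped objective under that sharing assumption, not the authors' code; function and variable names are illustrative.

```python
import numpy as np

def eps_ppo_loss(agent_batches, eps=0.2):
    """Clipped PPO surrogate over the pooled experience of all UAVs.

    agent_batches: list of (ratios, advantages) array pairs, one per UAV.
    ratios are pi_theta(a|s) / pi_theta_old(a|s) under the SHARED policy,
    so pooling samples is valid: all agents optimize the same parameters.
    Returns the scalar loss to minimize.
    """
    # Experience sharing: concatenate every UAV's samples into one batch.
    ratios = np.concatenate([r for r, _ in agent_batches])
    advs = np.concatenate([a for _, a in agent_batches])
    unclipped = ratios * advs
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps) * advs
    return -np.mean(np.minimum(unclipped, clipped))
```

Pooling in this way lets each agent benefit from the others' experience while keeping the update fully local to the shared-policy parameters, in contrast to the centralized critics used by MADDPG or MAPPO.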
Hamidreza Asadian-Rad
Department of Electrical Engineering, Iran University of Science and Technology (IUST), Tehran 1684613114, Iran
Hossein Soleimani
Assistant Professor at Iran University of Science and Technology
Cellular networks · 5G · LTE · Sensor Networks · Deep learning
Shahrokh Farahmand
Iran University of Science and Technology