Multi-Agent DRL for Queue-Aware Task Offloading in Hierarchical MEC-Enabled Air-Ground Networks

📅 2025-03-05
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This paper addresses the problem of minimizing total energy consumption in a UAV-enabled IoT system supported by multi-access edge computing (MEC) within 6G integrated aerial-terrestrial networks, through joint optimization of UAV trajectory, edge computing resource allocation, and queue-aware task offloading. To this end, we formulate the first heterogeneous-agent, continuous-action-space multi-agent Markov decision process (MDP) model tailored to this scenario. We propose MAPPO-BD, a multi-agent proximal policy optimization algorithm leveraging Beta-distribution-based stochastic policy modeling, to effectively tackle the non-convex, nonlinear nature of the joint optimization. Furthermore, we incorporate hard constraints on queueing delay and introduce a hierarchical MEC resource coordination mechanism. Simulation results demonstrate that, compared to baseline methods, our approach achieves significant energy reduction while strictly satisfying task latency and resource constraints, and improves edge resource utilization efficiency by 32.7%.
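The Beta-distribution policy in MAPPO-BD is well suited to continuous actions with physical bounds, such as a UAV velocity component or an offloading ratio, because a Beta sample already lives in (0, 1) and only needs rescaling, with no clipping of Gaussian tails. A minimal sketch of this idea, assuming illustrative function names and bounds not taken from the paper:

```python
import random

def sample_bounded_action(alpha: float, beta: float, low: float, high: float) -> float:
    """Sample a continuous action from a Beta(alpha, beta) policy head,
    rescaled from the distribution's (0, 1) support to [low, high]."""
    x = random.betavariate(alpha, beta)  # always in (0, 1), so no clipping is needed
    return low + (high - low) * x

# Example: a hypothetical UAV velocity component bounded to [-20, 20] m/s
velocity = sample_bounded_action(alpha=2.0, beta=2.0, low=-20.0, high=20.0)
```

In an actor-critic setup the network would output the concentration parameters (alpha, beta) per action dimension; the rescaling above is the only extra step compared with an unbounded Gaussian policy.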

๐Ÿ“ Abstract
Mobile edge computing (MEC)-enabled air-ground networks are a key component of 6G, employing aerial base stations (ABSs) such as unmanned aerial vehicles (UAVs) and high-altitude platform stations (HAPS) to provide dynamic services to ground IoT devices (IoTDs). These IoTDs support real-time applications (e.g., multimedia and Metaverse services) that demand high computational resources and strict quality of service (QoS) guarantees in terms of latency and task queue management. Given their limited energy and processing capabilities, IoTDs rely on UAVs and HAPS to offload tasks for distributed processing, forming a multi-tier MEC system. This paper tackles the overall energy minimization problem in MEC-enabled air-ground integrated networks (MAGIN) by jointly optimizing UAV trajectories, computing resource allocation, and queue-aware task offloading decisions. The optimization is challenging due to the nonconvex, nonlinear nature of this hierarchical system, which renders traditional methods ineffective. We reformulate the problem as a multi-agent Markov decision process (MDP) with continuous action spaces and heterogeneous agents, and propose a novel variant of multi-agent proximal policy optimization with a Beta distribution (MAPPO-BD) to solve it. Extensive simulations show that MAPPO-BD outperforms baseline schemes, achieving superior energy savings and efficient resource management in MAGIN while meeting queue delay and edge computing constraints.
Problem

Research questions and friction points this paper is trying to address.

Minimize total energy consumption in MEC-enabled air-ground networks.
Jointly optimize UAV trajectories, computing resource allocation, and task offloading.
Satisfy hard queue-delay and edge-computing resource constraints.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent DRL optimizes UAV trajectories and resources.
MAPPO-BD enhances energy efficiency in MEC networks.
Queue-aware offloading ensures QoS in hierarchical systems.
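Queue-aware offloading in a hierarchical system amounts to checking each tier's delay budget before admitting a task. A minimal sketch under the simplifying assumption that queueing delay is backlog divided by service rate (Little's-law style); the names and the UAV-then-HAPS fallback order are illustrative, not the paper's exact policy:

```python
def within_delay_budget(queue_bits: float, service_rate_bps: float, deadline_s: float) -> bool:
    """Approximate queueing delay as current backlog / service rate
    and enforce the hard deadline constraint."""
    return (queue_bits / service_rate_bps) <= deadline_s

def choose_tier(queue_uav_bits: float, queue_haps_bits: float,
                rate_uav_bps: float, rate_haps_bps: float,
                deadline_s: float) -> str:
    """Prefer the lower-tier UAV server; fall back to HAPS; otherwise reject."""
    if within_delay_budget(queue_uav_bits, rate_uav_bps, deadline_s):
        return "uav"
    if within_delay_budget(queue_haps_bits, rate_haps_bps, deadline_s):
        return "haps"
    return "reject"
```

In the paper's setting this kind of check would appear as a hard constraint shaping the feasible action set of the DRL agents rather than as a standalone heuristic.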