🤖 AI Summary
To address the challenges of real-time resource allocation and adaptive task offloading in dynamic, heterogeneous mobile edge computing (MEC) environments, this paper formulates a joint optimization model targeting end-to-end latency minimization. It pioneers the integration of retrieval-augmented generation (RAG) into MEC resource orchestration, combining context-aware retrieval with large language model (LLM) reasoning to jointly optimize task offloading ratios, transmit power, and computational resource allocation. The proposed approach offers strong interpretability, high scalability, and robust adaptability to environmental dynamics. Extensive simulations across diverse scenarios demonstrate that, under highly dynamic conditions—including fluctuating user-device computation capabilities and heterogeneous edge servers—the method reduces end-to-end latency by 30%–86% relative to state-of-the-art deep learning baselines, including a 57% reduction under varying device compute capacity and an 86% reduction under edge-server heterogeneity.
📝 Abstract
The rapid evolution of mobile edge computing (MEC) has introduced significant challenges in optimizing resource allocation in highly dynamic wireless communication systems, in which task offloading decisions must be made in real time. However, existing resource allocation strategies adapt poorly to the dynamic and heterogeneous characteristics of MEC systems, since they lack scalability, context-awareness, and interpretability. To address these issues, this paper proposes a novel retrieval-augmented generation (RAG) method to improve the performance of MEC systems. Specifically, a latency minimization problem is first formulated to jointly optimize the data offloading ratio, transmit power allocation, and computing resource allocation. Then, an LLM-enabled information-retrieval mechanism is proposed to solve the problem efficiently. Extensive experiments across multi-user, multi-task, and highly dynamic offloading scenarios show that the proposed method consistently reduces latency compared to several DL-based approaches, achieving a 57% improvement under varying user computing ability, 86% under heterogeneous edge servers, 30% under distinct transmit powers, and 42% under varying data volumes. These results demonstrate the effectiveness of LLM-driven solutions for resource allocation problems in MEC systems.
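The abstract does not spell out the system model, but the three decision variables it names (offloading ratio, transmit power, edge compute allocation) fit the standard partial-offloading latency formulation used in MEC. The sketch below illustrates that formulation only; the function name, parameter names, and all numeric values are hypothetical, not taken from the paper.

```python
import math

def end_to_end_latency(rho, p, f_edge, *, data_bits, cycles_per_bit,
                       f_local, bandwidth, channel_gain, noise_power):
    """Per-task latency under a generic partial-offloading MEC model.

    rho    -- offloading ratio in [0, 1]: fraction of the data sent to the edge
    p      -- uplink transmit power (W)
    f_edge -- edge CPU cycles/s allocated to this task
    Remaining keyword arguments are task and channel constants.
    """
    # Local branch: process the retained (1 - rho) fraction on the device.
    t_local = (1 - rho) * data_bits * cycles_per_bit / f_local
    if rho == 0:
        return t_local
    # Shannon uplink rate (bits/s) as a function of transmit power.
    rate = bandwidth * math.log2(1 + p * channel_gain / noise_power)
    # Offloaded branch: uplink transmission time plus edge execution time.
    t_offload = rho * data_bits / rate + rho * data_bits * cycles_per_bit / f_edge
    # The two branches run in parallel; latency is the slower of the two.
    return max(t_local, t_offload)
```

Under this model the joint problem is to pick `(rho, p, f_edge)` per user, subject to power and edge-capacity budgets, minimizing the maximum (or sum) of these latencies; the paper's contribution is solving that search with RAG-assisted LLM reasoning rather than a conventional solver or a trained DL policy.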