LLM-Driven Stationarity-Aware Expert Demonstrations for Multi-Agent Reinforcement Learning in Mobile Systems

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address severe non-stationarity in multi-agent reinforcement learning (MARL) on edge devices—arising from synchronous policy updates—this paper proposes RELED, a novel framework. First, it theoretically bounds non-stationarity to filter high-quality expert trajectories generated by large language models. Second, it introduces a hybrid expert-agent policy optimization module that adaptively integrates expert demonstrations with autonomous exploration. Operating within a distributed MARL architecture, RELED jointly enhances training stability and policy convergence. Experiments are conducted on a realistic urban traffic network derived from OpenStreetMap. Results demonstrate that RELED significantly outperforms state-of-the-art methods: it improves performance by +12.7%, accelerates convergence by 2.3×, and strengthens generalization across unseen scenarios. By reconciling sample efficiency, stability, and scalability under stringent edge-device constraints, RELED establishes a new paradigm for deploying MARL in resource-constrained edge environments.
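The summary's first idea, filtering LLM-generated expert trajectories by a non-stationarity bound, can be sketched as follows. The paper derives a theoretical bound; the score below is only a stand-in proxy (mean absolute step-to-step reward change during a rollout), and the function name and trajectory format `(state, action, reward)` are assumptions for illustration:

```python
def filter_expert_trajectories(trajectories, bound):
    """Keep trajectories whose estimated non-stationarity score is within `bound`.

    Proxy score (an assumption, not RELED's actual bound): mean absolute
    change in per-step reward, as a rough indicator of drift during the rollout.
    """
    kept = []
    for traj in trajectories:
        rewards = [r for (_, _, r) in traj]
        if len(rewards) < 2:
            continue  # too short to estimate drift
        drift = sum(abs(b - a) for a, b in zip(rewards, rewards[1:])) / (len(rewards) - 1)
        if drift <= bound:
            kept.append(traj)
    return kept
```

Trajectories with erratic reward signals (suggesting the environment shifted under the agent) are dropped, so only "training-stable" demonstrations reach the learners.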

📝 Abstract
Multi-agent reinforcement learning (MARL) has been increasingly adopted in many real-world applications. While MARL enables decentralized deployment on resource-constrained edge devices, it suffers from severe non-stationarity due to the synchronous updates of agent policies. This non-stationarity results in unstable training and poor policy convergence, especially as the number of agents increases. In this paper, we propose RELED, a scalable MARL framework that integrates large language model (LLM)-driven expert demonstrations with autonomous agent exploration. RELED incorporates a Stationarity-Aware Expert Demonstration module, which leverages theoretical non-stationarity bounds to enhance the quality of LLM-generated expert trajectories, thus providing high reward and training-stable samples for each agent. Moreover, a Hybrid Expert-Agent Policy Optimization module adaptively balances each agent's learning from both expert-generated and agent-generated trajectories, accelerating policy convergence and improving generalization. Extensive experiments with real city networks based on OpenStreetMap demonstrate that RELED achieves superior performance compared to state-of-the-art MARL methods.
Problem

Research questions and friction points this paper is trying to address.

Addresses non-stationarity in multi-agent reinforcement learning systems
Improves policy convergence stability with LLM-enhanced expert demonstrations
Optimizes hybrid learning from expert and agent-generated trajectories
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven expert demonstrations for MARL training
Stationarity-aware module enhances trajectory quality
Hybrid optimization balances expert and agent learning
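The third innovation, adaptively balancing expert and agent samples, can be illustrated with a minimal sketch. RELED's actual adaptive scheme is not spelled out in this summary, so the linear annealing schedule and function names here are assumptions: a generic pattern where training batches start expert-heavy and shift toward the agent's own experience:

```python
import random

def anneal_expert_ratio(step, total_steps, start=0.8, end=0.1):
    """Linearly anneal the expert share of each batch from `start` to `end`.

    A stand-in for RELED's adaptive balancing; the real schedule may depend
    on per-agent learning progress rather than the raw step count.
    """
    frac = min(step / total_steps, 1.0)
    return start + (end - start) * frac

def mix_batch(expert_traj, agent_traj, expert_ratio, batch_size, rng=None):
    """Sample a training batch blending expert and agent transitions."""
    rng = rng or random.Random(0)
    n_expert = min(len(expert_traj), round(batch_size * expert_ratio))
    batch = rng.sample(expert_traj, n_expert)          # expert demonstrations
    batch += rng.choices(agent_traj, k=batch_size - n_expert)  # own exploration
    rng.shuffle(batch)
    return batch
```

Early in training the agent mostly imitates high-quality demonstrations; as the ratio decays, its own exploration dominates, which matches the summary's claim of faster convergence without sacrificing autonomous learning.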
Tianyang Duan
Division of Computer Science, The University of Hong Kong, Hong Kong SAR, China
Zongyuan Zhang
Division of Computer Science, The University of Hong Kong, Hong Kong SAR, China
Zheng Lin
Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
Songxiao Guo
Division of Computer Science, The University of Hong Kong, Hong Kong SAR, China
Xiuxian Guan
Division of Computer Science, The University of Hong Kong, Hong Kong SAR, China
Guangyu Wu
Aon Centre for Innovation and Analytics, AON
Machine Learning, Data Mining, Social Network Analysis, Artificial Intelligence, Recommender Systems
Zihan Fang
Department of Computer Science, City University of Hong Kong, Hong Kong SAR, China
Haotian Meng
China Unicom Digital Technology, China Unicom Co., Ltd., China
Xia Du
Xiamen University of Technology
Adversarial Machine Learning
Ji-Zhe Zhou
School of Computer Science, Engineering Research Center of Machine Learning and Industry Intelligence, Sichuan University, Chengdu, China
Heming Cui
University of Hong Kong
Operating Systems, Programming Language, Distributed Systems, Security
Jun Luo
College of Computing and Data Science, Nanyang Technological University, Singapore
Yue Gao
Institute of Space Internet, Fudan University, Shanghai, China, and the School of Computer Science, Fudan University, Shanghai, China