🤖 AI Summary
To address severe non-stationarity in multi-agent reinforcement learning (MARL) on edge devices—arising from synchronous policy updates—this paper proposes RELED, a novel framework. First, it theoretically bounds non-stationarity to filter high-quality expert trajectories generated by large language models. Second, it introduces a hybrid expert-agent policy optimization module that adaptively integrates expert demonstrations with autonomous exploration. Operating within a distributed MARL architecture, RELED jointly enhances training stability and policy convergence. Experiments are conducted on a realistic urban traffic network derived from OpenStreetMap. Results demonstrate that RELED significantly outperforms state-of-the-art methods: it improves performance by 12.7%, converges 2.3× faster, and generalizes better to unseen scenarios. By reconciling sample efficiency, stability, and scalability under stringent edge-device constraints, RELED establishes a new paradigm for deploying MARL in resource-constrained edge environments.
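The first mechanism above — using a non-stationarity bound to filter LLM-generated expert trajectories — can be sketched in a few lines. The sketch below is illustrative only: the paper's actual theoretical bound is not reproduced here, so a simple proxy (variance of per-agent mean returns, which grows when concurrent policy updates destabilize outcomes) stands in for it, and all names are hypothetical.

```python
import numpy as np

def nonstationarity_score(returns_by_agent):
    """Proxy score: variance of per-agent mean returns for one trajectory.
    A low score suggests the trajectory is stable under the agents'
    synchronous policy updates. (Illustrative proxy, not the paper's bound.)"""
    return float(np.var([np.mean(r) for r in returns_by_agent]))

def filter_trajectories(candidates, bound):
    """Keep only expert trajectories whose proxy score is within the bound.
    `candidates` is a list of (trajectory_id, per-agent return lists)."""
    return [traj for traj, rets in candidates
            if nonstationarity_score(rets) <= bound]
```

Under this proxy, a trajectory whose agents earn similar returns passes the filter, while one with wildly divergent per-agent returns is discarded as non-stationary.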
📝 Abstract
Multi-agent reinforcement learning (MARL) has been increasingly adopted in many real-world applications. While MARL enables decentralized deployment on resource-constrained edge devices, it suffers from severe non-stationarity due to the synchronous updates of agent policies. This non-stationarity results in unstable training and poor policy convergence, especially as the number of agents increases. In this paper, we propose RELED, a scalable MARL framework that integrates large language model (LLM)-driven expert demonstrations with autonomous agent exploration. RELED incorporates a Stationarity-Aware Expert Demonstration module, which leverages theoretical non-stationarity bounds to enhance the quality of LLM-generated expert trajectories, thus providing high-reward, training-stable samples for each agent. Moreover, a Hybrid Expert-Agent Policy Optimization module adaptively balances each agent's learning from both expert-generated and agent-generated trajectories, accelerating policy convergence and improving generalization. Extensive experiments with real city networks based on OpenStreetMap demonstrate that RELED achieves superior performance compared to state-of-the-art MARL methods.
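The hybrid expert-agent optimization described above — balancing learning from expert-generated and agent-generated trajectories — is commonly realized as a weighted combination of an imitation loss and a policy-gradient loss. The minimal sketch below assumes a linearly decaying mixing weight (the paper's adaptive balancing rule is not specified here); all function and parameter names are hypothetical.

```python
import numpy as np

def hybrid_policy_loss(agent_logp, agent_adv, expert_logp, step, total_steps):
    """Blend a policy-gradient loss on the agent's own trajectories with a
    behavior-cloning loss on expert (LLM-generated) trajectories.

    agent_logp : log-probs of actions from autonomous exploration
    agent_adv  : advantage estimates for those actions
    expert_logp: log-probs the current policy assigns to expert actions
    """
    # Hypothetical schedule: start by imitating experts, shift toward
    # the agent's own experience as training progresses.
    lam = max(0.0, 1.0 - step / total_steps)
    pg_loss = -np.mean(agent_logp * agent_adv)   # policy-gradient term
    bc_loss = -np.mean(expert_logp)              # behavior-cloning term
    return (1.0 - lam) * pg_loss + lam * bc_loss
```

At step 0 the loss is pure imitation; by the final step it is pure policy gradient, so expert demonstrations bootstrap early training without constraining the converged policy.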