Learning Mean Field Control on Sparse Graphs

📅 2025-01-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multi-agent reinforcement learning (MARL) struggles to achieve effective mean-field control on large-scale sparse agent networks, e.g. power-law graphs, where conventional graphon/graphex-based mean-field approaches break down on low-density, highly skewed topologies.

Method: This work introduces local weak convergence theory into the mean-field control framework for the first time, proposing a scalable local mean-field control model. The model supports sparse graph sequences with finite first moments, beyond the reach of standard graphon methods, and comes with rigorous theoretical guarantees of scalability. We further design a sparse-graph sequence modeling approach grounded in local weak convergence analysis, together with a scalable mean-field policy optimization algorithm.

Results: Experiments on synthetic and real-world sparse networks demonstrate that our method significantly outperforms state-of-the-art graphon/graphex mean-field approaches, improving policy performance by 23%–41% under skewed degree distributions and extreme sparsity, thereby bridging critical theoretical and practical gaps in mean-field MARL modeling for realistic sparse structures.
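A minimal sketch of the "local" idea described above, not the paper's actual algorithm: each agent acts on its own state plus the empirical state distribution of its graph neighbors, rather than on a global graphon-style aggregate. All names (`local_mean_field`, `step`, the toy graph and policy) are hypothetical illustrations.

```python
from collections import Counter

def local_mean_field(states, neighbors, agent):
    """Empirical distribution of the agent's neighbors' states."""
    nbrs = neighbors[agent]
    if not nbrs:
        return {}
    counts = Counter(states[n] for n in nbrs)
    return {s: c / len(nbrs) for s, c in counts.items()}

def step(states, neighbors, policy):
    """One synchronous update: every agent acts on its local mean field."""
    return [policy(states[i], local_mean_field(states, neighbors, i))
            for i in range(len(states))]

# Toy example: binary states on a small sparse graph (a path),
# with a majority-following policy, purely for illustration.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
states = [0, 0, 1, 1]
policy = lambda s, mf: 1 if mf.get(1, 0.0) > 0.5 else 0
print(step(states, neighbors, policy))  # → [0, 0, 0, 1]
```

Because each update touches only an agent's neighborhood, the per-step cost scales with the number of edges rather than with all agent pairs, which is what makes the approach viable on sparse graphs.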

📝 Abstract
Large agent networks are abundant in applications and nature and pose difficult challenges in the field of multi-agent reinforcement learning (MARL) due to their computational and theoretical complexity. While graphon mean field games and their extensions provide efficient learning algorithms for dense and moderately sparse agent networks, the case of realistic sparser graphs remains largely unsolved. Thus, we propose a novel mean field control model inspired by local weak convergence to include sparse graphs such as power-law networks with coefficients above two. Besides a theoretical analysis, we design scalable learning algorithms which apply to the challenging class of graph sequences with finite first moment. We compare our model and algorithms on various examples over synthetic and real-world networks against mean field algorithms based on Lp graphons and graphexes. Our approach outperforms existing methods in many examples and across various networks, owing to its design targeting an important but so-far hard-to-solve class of MARL problems.
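To make the target graph class concrete, here is a small illustrative check, under the assumption of a Pareto-like tail: for a power-law degree distribution P(k) ~ k^(-alpha) with exponent alpha above two, the mean degree E[k] is finite, so the empirical mean degree stays bounded as the network grows. The sampler `sample_power_law_degree` is a hypothetical helper, not code from the paper.

```python
import random

def sample_power_law_degree(alpha, k_min=1):
    """Inverse-transform sample from a continuous power law, floored to an int."""
    u = random.random()
    return int(k_min * (1 - u) ** (-1.0 / (alpha - 1)))

random.seed(0)
alpha = 2.5  # tail exponent above two -> finite first moment
for n in [10_000, 100_000]:
    degs = [sample_power_law_degree(alpha) for _ in range(n)]
    print(n, sum(degs) / n)  # mean degree stays bounded as n grows
```

For exponents at or below two the same experiment would show the sample mean drifting upward with n, which is exactly the regime the finite-first-moment assumption excludes.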
Problem

Research questions and friction points this paper is trying to address.

Multi-Agent Reinforcement Learning
Large-Scale Distributed Networks
Effective Mean-Field Control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mathematical Theory
Power Law Networks
Multi-Agent Reinforcement Learning