Mean-Field Control on Sparse Graphs: From Local Limits to GNNs via Neighborhood Distributions

๐Ÿ“… 2026-01-29
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses a key limitation of classical mean-field control, which relies on a fully connected, all-to-all interaction assumption and therefore does not extend to real-world multi-agent systems on sparse graphs. The authors propose a novel state representation based on rooted-neighborhood-augmented probability measures, enabling a mean-field control framework tailored to large-scale sparse graph-structured systems. By establishing a finite-horizon locality principle, they rigorously prove that optimal policies depend only on a finite-hop neighborhood, and leverage this insight to design a lifted-space dynamic programming algorithm operating on neighborhood distributions. This approach provides both theoretical grounding and a practical algorithmic foundation for applying graph neural networks (GNNs) to sparse multi-agent reinforcement learning, demonstrating high efficiency and interpretability on complex topologies.

๐Ÿ“ Abstract
Mean-field control (MFC) offers a scalable solution to the curse of dimensionality in multi-agent systems but traditionally hinges on the restrictive assumption of exchangeability via dense, all-to-all interactions. In this work, we bridge the gap to real-world network structures by proposing a rigorous framework for MFC on large sparse graphs. We redefine the system state as a probability measure over decorated rooted neighborhoods, effectively capturing local heterogeneity. Our central contribution is a theoretical foundation for scalable reinforcement learning in this setting. We prove horizon-dependent locality: for finite-horizon problems, an agent's optimal policy at time t depends strictly on its (T-t)-hop neighborhood. This result renders the infinite-dimensional control problem tractable and underpins a novel Dynamic Programming Principle (DPP) on the lifted space of neighborhood distributions. Furthermore, we formally and experimentally justify the use of Graph Neural Networks (GNNs) for actor-critic algorithms in this context. Our framework naturally recovers classical MFC as a degenerate case while enabling efficient, theoretically grounded control on complex sparse topologies.
Problem

Research questions and friction points this paper is trying to address.

Mean-field control
Sparse graphs
Multi-agent systems
Local heterogeneity
Scalable reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mean-field control
Sparse graphs
Neighborhood distributions
Dynamic Programming Principle
Graph Neural Networks
๐Ÿ”Ž Similar Papers
No similar papers found.
T
Tobias Schmidt
Department of Mathematics, TU Darmstadt, Darmstadt, Germany
Kai Cui
Technische Universitรคt Darmstadt
Mean Field Games · Reinforcement Learning · LLM Inference