Modelling Mean-Field Games with Neural Ordinary Differential Equations

📅 2025-04-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Traditional PDE-based approaches to mean-field games (MFGs) suffer from challenges in guaranteeing solution existence/uniqueness, substantial modeling bias, and limited applicability to partially observable or noisy settings. To address these limitations, this work proposes the first end-to-end, data-driven MFG framework integrating neural ordinary differential equations (Neural ODEs). By replacing analytical PDE solving with automatic differentiation, our method enables numerically robust learning of equilibrium policy distributions without relying on explicit PDE solvers—achieving both model compactness and theoretical existence guarantees. Evaluated across three benchmark MFG tasks of increasing complexity, the approach accurately recovers population-level policy distributions from minimal observational data, demonstrating superior generalization and sample efficiency. This paradigm shift advances MFG modeling in complex, non-ideal environments where classical assumptions fail.

Technology Category

Application Category

📝 Abstract
Mean-field game theory relies on approximating games that would otherwise have been intractable to model. While the games can be solved analytically via the associated system of partial derivatives, this approach is not model-free, can lead to the loss of the existence or uniqueness of solutions and may suffer from modelling bias. To reduce the dependency between the model and the game, we combine mean-field game theory with deep learning in the form of neural ordinary differential equations. The resulting model is data-driven, lightweight and can learn extensive strategic interactions that are hard to capture using mean-field theory alone. In addition, the model is based on automatic differentiation, making it more robust and objective than approaches based on finite differences. We highlight the efficiency and flexibility of our approach by solving three mean-field games that vary in their complexity, observability and the presence of noise. Using these results, we show that the model is flexible, lightweight and requires few observations to learn the distribution underlying the data.
Problem

Research questions and friction points this paper is trying to address.

Model intractable mean-field games using neural ODEs
Reduce model-game dependency with data-driven deep learning
Learn complex strategic interactions robustly via automatic differentiation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines mean-field games with neural ODEs
Uses automatic differentiation for robustness
Data-driven model learns strategic interactions
🔎 Similar Papers
No similar papers found.