Solving Continuous Mean Field Games: Deep Reinforcement Learning for Non-Stationary Dynamics

📅 2025-10-25
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Solving non-stationary mean field games (MFGs) with continuous state spaces remains challenging because time-varying population distributions and strategic interactions are hard to model. To address this, the paper proposes the first unified framework integrating fictitious play, deep reinforcement learning (DRL), and conditional normalizing flows: DRL computes best responses efficiently, supervised learning represents the time-varying average policy, and conditional normalizing flows explicitly model the time-dependent population density dynamics. Experiments on three benchmark tasks of increasing complexity show that the method significantly improves both density-approximation accuracy and strategy convergence speed, enabling efficient and stable equilibrium computation for non-stationary continuous-space MFGs where prior methods, limited to finite spaces or stationary models, fall short.
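
The fictitious-play loop the summary describes can be sketched on a toy problem. The snippet below uses a two-action congestion game in which an action's cost grows with the fraction of the population choosing it, so the equilibrium splits the population evenly. It is a minimal illustration only: the paper's DRL best response and supervised policy averaging are replaced here by an exact best response and an explicit running average.

```python
import numpy as np

def best_response(mean_field):
    """Exact best response in the toy congestion game: pick the less congested action."""
    br = np.zeros(2)
    br[np.argmin(mean_field)] = 1.0
    return br

# Average (fictitious-play) policy, initialized with everyone on action 0.
avg_policy = np.array([1.0, 0.0])

for k in range(1, 200):
    br = best_response(avg_policy)            # best response to the current population
    avg_policy += (br - avg_policy) / (k + 1) # fictitious-play running average

print(avg_policy)  # approaches the 50/50 equilibrium split
```

The `1/(k+1)` step size is what makes this fictitious play: each iteration mixes the new best response into the empirical average of all past best responses.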

๐Ÿ“ Abstract
Mean field games (MFGs) have emerged as a powerful framework for modeling interactions in large-scale multi-agent systems. Despite recent advancements in reinforcement learning (RL) for MFGs, existing methods are typically limited to finite spaces or stationary models, hindering their applicability to real-world problems. This paper introduces a novel deep reinforcement learning (DRL) algorithm specifically designed for non-stationary continuous MFGs. The proposed approach builds upon a Fictitious Play (FP) methodology, leveraging DRL for best-response computation and supervised learning for average policy representation. Furthermore, it learns a representation of the time-dependent population distribution using a Conditional Normalizing Flow. To validate the effectiveness of our method, we evaluate it on three different examples of increasing complexity. By addressing critical limitations in scalability and density approximation, this work represents a significant advancement in applying DRL techniques to complex MFG problems, bringing the field closer to real-world multi-agent systems.
Problem

Research questions and friction points this paper is trying to address.

Solving continuous mean field games with non-stationary dynamics
Addressing scalability and density approximation limitations in MFGs
Developing DRL methods for complex real-world multi-agent systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep reinforcement learning for non-stationary continuous MFGs
Fictitious Play with DRL and supervised learning integration
Conditional Normalizing Flow for population distribution representation
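
The last innovation above can be illustrated in one dimension. A conditional normalizing flow pushes a fixed base density through an invertible map whose parameters depend on the conditioning variable, here time `t`; the paper learns this conditioner with a neural network, whereas the sketch below uses hypothetical hand-picked functions `mu(t)` and `log_sigma(t)` purely to show the change-of-variables mechanics.

```python
import numpy as np

def mu(t):
    """Conditioner: time-varying shift (illustrative, not from the paper)."""
    return np.sin(t)

def log_sigma(t):
    """Conditioner: time-varying log-scale (illustrative, not from the paper)."""
    return -0.5 * t

def sample(t, n, rng):
    """Push base samples z ~ N(0, 1) through the affine flow x = mu(t) + sigma(t) * z."""
    z = rng.standard_normal(n)
    return mu(t) + np.exp(log_sigma(t)) * z

def log_density(x, t):
    """Change of variables: log p_t(x) = log N(z; 0, 1) - log|dx/dz|."""
    z = (x - mu(t)) * np.exp(-log_sigma(t))
    log_base = -0.5 * (z**2 + np.log(2 * np.pi))
    return log_base - log_sigma(t)

rng = np.random.default_rng(0)
xs = sample(t=1.0, n=50_000, rng=rng)
print(xs.mean(), xs.std())  # close to mu(1.0) ≈ 0.841 and exp(-0.5) ≈ 0.607
```

Because the map is invertible for every `t`, the same machinery yields both samples from the time-dependent population density and exact log-densities, which is what lets the framework represent a non-stationary mean field rather than a single stationary distribution.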