Reinforcement Learning for Finite Space Mean-Field Type Games

📅 2024-09-25
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Addressing the computational intractability and poor scalability of Nash equilibrium computation for mean-field type games (MFTGs) with finite state spaces, this paper proposes a reinforcement learning framework for MFTGs that provides both theoretical guarantees and high-dimensional scalability. The authors establish a rigorous theoretical connection between MFTG solutions and approximate Nash equilibria in finite-size coalition games. Methodologically, they introduce a dual-path approach: (i) quantization of mean-field spaces combined with Nash Q-learning, which converges under mild conditions, and (ii) a deep reinforcement learning variant that scales to high-dimensional mean-field distributions. Supported by stability analysis and evaluation across five benchmark environments, the approach handles mean-field distributions of dimension up to 200, surpassing existing methods in both computational efficiency and scalability.

📝 Abstract
Mean field type games (MFTGs) describe Nash equilibria between large coalitions: each coalition consists of a continuum of cooperative agents who maximize the average reward of their coalition while interacting non-cooperatively with a finite number of other coalitions. Although the theory has been extensively developed, efficient and scalable computational methods are still lacking. Here, we develop reinforcement learning methods for such games in a finite space setting with general dynamics and reward functions. We start by proving that MFTG solutions yield approximate Nash equilibria in finite-size coalition games. We then propose two algorithms. The first is based on quantization of mean-field spaces and Nash Q-learning, for which we provide convergence and stability analysis. We then propose a deep reinforcement learning algorithm, which can scale to larger spaces. Numerical experiments in 5 environments with mean-field distributions of dimension up to $200$ show the scalability and efficiency of the proposed method.
Problem

Research questions and friction points this paper is trying to address.

Develop RL methods for finite space mean-field type games
Prove MFTG solutions yield approximate Nash equilibria
Scale algorithms to high-dimensional mean-field distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantization of mean-field spaces with Nash Q-learning
Deep reinforcement learning for larger spaces
Scalable algorithms for finite space MFTGs
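The quantization idea behind the first algorithm can be illustrated with a minimal sketch: a mean-field distribution over $n$ finite states is a probability vector, and the simplest discretization projects it onto a simplex grid of resolution $1/k$. The function name and the largest-remainder rounding rule below are our own assumptions for illustration, not the paper's actual construction:

```python
# Hypothetical illustration of mean-field space quantization (not the
# paper's implementation): round a probability vector mu onto the grid
# of simplex points whose coordinates are multiples of 1/k.

def quantize_mean_field(mu, k):
    """Project mu onto the {j/k} simplex grid, preserving total mass 1.

    Scales mu by k, takes floors, then assigns the leftover units of 1/k
    to the coordinates with the largest fractional parts
    (largest-remainder rule), so the result still sums to exactly 1.
    """
    scaled = [m * k for m in mu]
    floors = [int(s) for s in scaled]
    leftover = k - sum(floors)  # units of 1/k still to distribute
    # Coordinates ranked by fractional part, largest first.
    order = sorted(range(len(mu)),
                   key=lambda i: scaled[i] - floors[i],
                   reverse=True)
    for i in order[:leftover]:
        floors[i] += 1
    return [f / k for f in floors]

mu = [0.37, 0.41, 0.22]
print(quantize_mean_field(mu, 10))  # a nearby grid point that sums to 1
```

With a grid of resolution $1/k$ over $n$ states there are finitely many such points, which is what makes tabular Nash Q-learning over the quantized mean-field space feasible; the deep RL variant avoids this discretization and hence scales to higher dimensions.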
👥 Authors

Kai Shao
Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning; NYU Shanghai, Shanghai, 200126, People's Republic of China

Jiacheng Shen
Assistant Professor, Duke Kunshan University
Research interests: distributed systems, RDMA

Chijie An
NYU Shanghai, Shanghai, 200126, People's Republic of China

Mathieu Laurière
Assistant Professor of Mathematics and Data Science, NYU Shanghai
Research interests: mean field games, numerical methods, partial differential equations, stochastic analysis, machine learning