MF-OML: Online Mean-Field Reinforcement Learning with Occupation Measures for Large Population Games

📅 2024-05-01
🏛️ arXiv.org
📈 Citations: 7
✨ Influential: 0
📄 PDF
🤖 AI Summary
Computing approximate Nash equilibria in large-population sequential symmetric games remains challenging: existing methods rely on restrictive structural assumptions (e.g., zero-sum or potential game structures), require access to simulators, or suffer from high computational complexity and a lack of convergence guarantees. To address this, we propose MF-OML, the first simulator-free, structure-agnostic online mean-field reinforcement learning algorithm. Built upon occupation-measure modeling, MF-OML is the first method to achieve polynomial-time, globally convergent Nash equilibrium computation for general monotone mean-field games. Theoretically, under strong and weak Lasry-Lions monotonicity, it attains high-probability regret bounds of $\tilde{O}(M^{3/4} + N^{-1/2}M)$ and $\tilde{O}(M^{11/12} + N^{-1/6}M)$, respectively, where $M$ is the number of episodes and $N$ the population size. Altogether, MF-OML provides the first scalable, minimally assumptive, and provably convergent computational framework for mean-field games.

📝 Abstract
Reinforcement learning for multi-agent games has attracted lots of attention recently. However, given the challenge of solving Nash equilibria for large population games, existing works with guaranteed polynomial complexities either focus on variants of zero-sum and potential games, or aim at solving (coarse) correlated equilibria, or require access to simulators, or rely on certain assumptions that are hard to verify. This work proposes MF-OML (Mean-Field Occupation-Measure Learning), an online mean-field reinforcement learning algorithm for computing approximate Nash equilibria of large population sequential symmetric games. MF-OML is the first fully polynomial multi-agent reinforcement learning algorithm for provably solving Nash equilibria (up to mean-field approximation gaps that vanish as the number of players $N$ goes to infinity) beyond variants of zero-sum and potential games. When evaluated by the cumulative deviation from Nash equilibria, the algorithm is shown to achieve a high probability regret bound of $\tilde{O}(M^{3/4}+N^{-1/2}M)$ for games with the strong Lasry-Lions monotonicity condition, and a regret bound of $\tilde{O}(M^{11/12}+N^{-1/6}M)$ for games with only the Lasry-Lions monotonicity condition, where $M$ is the total number of episodes and $N$ is the number of agents of the game. As a byproduct, we also obtain the first tractable globally convergent computational algorithm for computing approximate Nash equilibria of monotone mean-field games.
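The regret bounds above imply that the average per-episode deviation from Nash equilibrium vanishes as the number of episodes grows; a short derivation (ours, not from the abstract) simply divides the cumulative bound by $M$:

```latex
% Average per-episode deviation under the strong monotonicity condition:
% divide the cumulative regret bound by the number of episodes M.
\frac{1}{M}\,\tilde{O}\!\left(M^{3/4} + N^{-1/2} M\right)
  = \tilde{O}\!\left(M^{-1/4} + N^{-1/2}\right)
  \xrightarrow[M \to \infty]{} \tilde{O}\!\left(N^{-1/2}\right).
% Under the weaker monotonicity condition the same step gives
% \tilde{O}(M^{-1/12} + N^{-1/6}): the average deviation again
% vanishes in M, up to the mean-field approximation gap in N.
```

In both regimes the residual term is the mean-field approximation gap, which itself vanishes as the population size $N$ goes to infinity.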
Problem

Research questions and friction points this paper is trying to address.

Computing approximate Nash equilibria for large-population sequential symmetric games
Overcoming the limitations of existing polynomial-complexity methods (restrictive game structures, simulator access, hard-to-verify assumptions)
Addressing challenges in multi-agent reinforcement learning without relying on restrictive assumptions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online mean-field reinforcement learning algorithm (MF-OML) that is simulator-free and structure-agnostic
Models large populations via occupation measures
Computes approximate Nash equilibria with polynomial complexity and global convergence guarantees
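To make the occupation-measure idea concrete, here is a minimal illustrative sketch (ours, not the paper's algorithm): an occupation measure assigns to each state-action pair its visitation frequency, and can be estimated empirically from sampled episodes. The function name and toy data below are hypothetical.

```python
from collections import Counter

def empirical_occupation_measure(trajectories, horizon):
    """Estimate the state-action occupation measure d(s, a):
    the fraction of time steps, across episodes of a fixed horizon,
    at which the pair (s, a) is visited. Illustrative only."""
    counts = Counter()
    for traj in trajectories:            # traj: list of (state, action) pairs
        for s, a in traj[:horizon]:
            counts[(s, a)] += 1
    total = len(trajectories) * horizon  # total number of step slots
    return {sa: c / total for sa, c in counts.items()}

# Two toy episodes over states {0, 1} and actions {"L", "R"}.
trajs = [[(0, "L"), (1, "R")], [(0, "L"), (0, "R")]]
d = empirical_occupation_measure(trajs, horizon=2)
# (0, "L") is visited in 2 of the 4 step slots, so d[(0, "L")] == 0.5
```

Because a valid occupation measure sums to one over all state-action pairs, equilibrium computation can be recast as an optimization over such measures rather than over policies directly, which is the modeling device the summary refers to.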