Last Iterate Convergence in Monotone Mean Field Games

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing algorithms for monotone mean-field games (MFGs) lack last-iterate convergence guarantees under the Lasry–Lions monotonicity condition. Method: The paper proposes a simple proximal-point (PP) type method for computing MFG equilibria, together with an implementable approximation, $\mathtt{APP}$, based on the observation that each PP update is equivalent to solving a regularized MFG, which can be handled by mirror descent. Contribution/Results: This work establishes the first last-iterate convergence guarantee for monotone MFGs under Lasry–Lions-type monotonicity and shows that regularized mirror descent converges at an exponential rate. Numerical experiments confirm that $\mathtt{APP}$ computes equilibria efficiently. The result fills a notable theoretical gap and supports the practical use of MFGs in multi-agent learning.
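Why last-iterate convergence needs a proximal-point (implicit) update can be seen on a minimal monotone problem. The sketch below is a toy illustration, not the paper's algorithm: it uses a skew-symmetric (hence monotone, but not strongly monotone) linear operator, where the explicit forward step spirals away from the equilibrium while the implicit proximal step contracts to it. All names and constants here are illustrative assumptions.

```python
import math

# Toy monotone operator: F(x) = A x with A = [[0, 1], [-1, 0]],
# a skew-symmetric matrix (monotone but not strongly monotone).
# The unique equilibrium (zero of F) is the origin.

ETA = 0.5  # step size (hypothetical choice)

def explicit_step(x):
    """Forward step x - eta * F(x): its last iterate spirals outward."""
    return [x[0] - ETA * x[1], x[1] + ETA * x[0]]

def proximal_step(x):
    """Implicit proximal-point step: solve y = x - eta * F(y).
    For this linear F, y = (I + eta*A)^{-1} x in closed form."""
    d = 1.0 + ETA * ETA
    return [(x[0] - ETA * x[1]) / d, (x[1] + ETA * x[0]) / d]

def norm(x):
    return math.hypot(x[0], x[1])

x_exp = [1.0, 0.0]
x_pp = [1.0, 0.0]
for _ in range(50):
    x_exp = explicit_step(x_exp)
    x_pp = proximal_step(x_pp)

print(norm(x_exp))  # explicit last iterate diverges
print(norm(x_pp))   # proximal last iterate converges to the equilibrium
```

Each explicit step multiplies the distance to the equilibrium by sqrt(1 + eta^2) > 1, while each proximal step multiplies it by 1/sqrt(1 + eta^2) < 1, so only the implicit update enjoys (exponential) last-iterate convergence on this instance.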

📝 Abstract
Mean Field Game (MFG) is a framework for modeling and approximating the behavior of large numbers of agents. Computing equilibria in MFGs has been of interest in multi-agent reinforcement learning. However, theoretical guarantees that the last updated policy converges to an equilibrium have been limited. We propose the use of a simple, proximal-point (PP) type method to compute equilibria for MFGs. We then provide the first last-iterate convergence (LIC) guarantee under the Lasry–Lions-type monotonicity condition. We also propose an approximation of the update rule of PP ($\mathtt{APP}$) based on the observation that it is equivalent to solving the regularized MFG, which can be solved by mirror descent. We further establish that the regularized mirror descent achieves LIC at an exponential rate. Our numerical experiments demonstrate that $\mathtt{APP}$ efficiently computes the equilibrium.
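The abstract's recipe (solve a regularized MFG by mirror descent) can be sketched on a toy single-state congestion game, where the cost of an action equals the population mass playing it; this cost operator is Lasry–Lions monotone and the regularized equilibrium is the uniform policy. This is a hedged illustration under those assumptions, not the paper's exact $\mathtt{APP}$ update; the step size and regularization strength are hypothetical.

```python
import math

# Toy single-state congestion MFG with 3 actions: the cost of action a is
# the population mass on a, c_a(pi) = pi_a 
# (a Lasry-Lions monotone cost operator).
# The entropy-regularized equilibrium is the uniform policy [1/3, 1/3, 1/3].

ETA = 0.1   # mirror-descent step size (assumption)
TAU = 0.1   # entropy-regularization strength (assumption)

def md_step(pi):
    """One KL mirror-descent step on the tau-regularized game:
    log pi' = (1 - eta*tau) * log pi - eta * c(pi), then normalize."""
    logits = [(1.0 - ETA * TAU) * math.log(p) - ETA * p for p in pi]
    m = max(logits)                      # shift for numerical stability
    w = [math.exp(l - m) for l in logits]
    s = sum(w)
    return [x / s for x in w]

pi = [0.7, 0.2, 0.1]  # arbitrary initial policy
for _ in range(1000):
    pi = md_step(pi)

print(pi)  # last iterate approaches the uniform equilibrium
```

The regularization term shrinks the log-policy deviation by a factor (1 - eta*tau) per step, so the last iterate (not an average) converges to the regularized equilibrium at an exponential rate, mirroring the LIC result stated in the abstract.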
Problem

Research questions and friction points this paper is trying to address.

Monotone Mean Field Games
Equilibrium Stability
Multi-Player Learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Monotone Mean Field Games
APP Methodology
Mirror Descent Algorithm