Deep Reinforcement Learning for Infinite Horizon Mean Field Problems in Continuous Spaces

📅 2023-09-19
🏛️ Journal of Machine Learning
📈 Citations: 5
Influential: 0
🤖 AI Summary
This work addresses mean field games (MFGs), mean field control (MFC), and mixed mean field control game (MFCG) problems in continuous state spaces over infinite horizons, proposing a unified actor-critic framework that solves all three problem classes. The method parameterizes the score function, i.e., the gradient of the log-density, to represent the mean-field distribution implicitly, and couples it with Langevin dynamics for online sampling and distributional updates. Crucially, whether the algorithm converges to an MFG equilibrium, an MFC optimum, or a hybrid MFCG solution is determined solely by the choice of relative learning rates. By bypassing explicit density modeling, the approach improves the accuracy of the distribution's evolution and the efficiency of policy-distribution coordination. On linear-quadratic benchmarks in the infinite-horizon setting, the algorithm exhibits stable convergence to the known solutions.
📝 Abstract
We present the development and analysis of a reinforcement learning (RL) algorithm designed to solve continuous-space mean field game (MFG) and mean field control (MFC) problems in a unified manner. The proposed approach pairs the actor-critic (AC) paradigm with a representation of the mean field distribution via a parameterized score function, which can be efficiently updated in an online fashion, and uses Langevin dynamics to obtain samples from the resulting distribution. The AC agent and the score function are updated iteratively to converge, either to the MFG equilibrium or the MFC optimum for a given mean field problem, depending on the choice of learning rates. A straightforward modification of the algorithm allows us to solve mixed mean field control games (MFCGs). The performance of our algorithm is evaluated using linear-quadratic benchmarks in the asymptotic infinite horizon framework.
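The abstract's key mechanism, convergence to either the MFG equilibrium or the MFC optimum depending on the choice of learning rates, is a two-timescale idea. The following toy sketch (all variable names and dynamics are hypothetical illustrations, not the paper's actual algorithm) shows the shape of such a loop: a fast policy-like parameter tracks a slowly updated scalar summary of the mean-field distribution, and reversing the relative rates would target the other regime:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.0   # stand-in for the actor-critic policy parameter
mu = 5.0      # stand-in scalar summary of the mean-field distribution

# rho_mu << rho_theta: the distribution looks "frozen" to the fast learner,
# the equilibrium (MFG-style) regime; swapping the ordering would instead
# push the system toward the control-optimum (MFC-style) regime.
rho_theta, rho_mu = 0.1, 0.001
for _ in range(5000):
    x = mu + rng.standard_normal()     # state sampled around the current mean field
    theta += rho_theta * (x - theta)   # fast update: policy tracks the population
    mu += rho_mu * (theta - mu)        # slow online update of the distribution summary
```

After the loop the two quantities have coupled: `theta` closely tracks `mu`, which barely drifts. This is only a cartoon of the timescale separation; the paper's actual updates act on a neural actor-critic and a parameterized score network.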
Problem

Research questions and friction points this paper is trying to address.

How to solve continuous-space mean field game (MFG) and mean field control (MFC) problems over infinite horizons within a single framework.
How to represent and update the mean-field distribution online without explicit density estimation.
How to validate the approach on linear-quadratic benchmarks with known solutions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified RL algorithm for MFG, MFC, and mixed MFCG problems
Actor-critic paradigm with a parameterized score function
Langevin dynamics for efficient sampling from the mean-field distribution
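The Langevin-sampling idea behind the last bullet can be sketched as follows. This is a generic unadjusted Langevin iteration driven by a user-supplied score function, not the paper's implementation; the standard-Gaussian score used in the example (`-x`) is chosen purely because its target distribution is known:

```python
import numpy as np

def langevin_sample(score, x0, step=1e-2, n_steps=500, rng=None):
    """Unadjusted Langevin dynamics: x <- x + step * score(x) + sqrt(2*step) * noise.

    `score` approximates the gradient of the log-density of the target
    distribution, mirroring the score-based representation of the
    mean-field distribution described above.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Example: the score of a standard Gaussian is -x, so the chain's samples
# should look approximately N(0, 1) once it has mixed.
rng = np.random.default_rng(0)
samples = np.array([langevin_sample(lambda x: -x, np.zeros(1), rng=rng)
                    for _ in range(200)])
```

In the paper's setting the hand-written score is replaced by a parameterized network updated online, so the sampler always draws from the current estimate of the mean-field distribution.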
Andrea Angiuli
Prime Machine Learning Team, Amazon
J. Fouque
Department of Statistics and Applied Probability, University of California, Santa Barbara
Ruimeng Hu
Associate Professor, University of California, Santa Barbara
Financial Mathematics · Deep Learning
Alan Raydan
Department of Mathematics, University of California, Santa Barbara