Bench-MFG: A Benchmark Suite for Learning in Stationary Mean Field Games

📅 2026-02-13
📈 Citations: 0
✨ Influential: 0

๐Ÿ“ Abstract
The intersection of Mean Field Games (MFGs) and Reinforcement Learning (RL) has fostered a growing family of algorithms designed to solve large-scale multi-agent systems. However, the field currently lacks a standardized evaluation protocol, forcing researchers to rely on bespoke, isolated, and often simplistic environments. This fragmentation makes it difficult to assess the robustness, generalization, and failure modes of emerging methods. To address this gap, we propose a comprehensive benchmark suite for MFGs (Bench-MFG), focusing on the discrete-time, discrete-space, stationary setting for the sake of clarity. We introduce a taxonomy of problem classes, ranging from no-interaction and monotone games to potential and dynamics-coupled games, and provide prototypical environments for each. Furthermore, we propose MF-Garnets, a method for generating random MFG instances to facilitate rigorous statistical testing. We benchmark a variety of learning algorithms across these environments, including a novel black-box approach (MF-PSO) for exploitability minimization. Based on our extensive empirical results, we propose guidelines to standardize future experimental comparisons. Code available at \href{https://github.com/lorenzomagnino/Bench-MFG}{https://github.com/lorenzomagnino/Bench-MFG}.
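To make the two central ideas of the abstract concrete, here is a minimal sketch of (a) a Garnet-style random instance generator in the spirit of MF-Garnets, and (b) the exploitability metric for a stationary policy. Everything here is illustrative: the function names, signatures, and the crowd-averse mean-field coupling r(s, a, μ) = R[s, a] − μ[s] are assumptions for the sketch, not the actual Bench-MFG API or reward model.

```python
import numpy as np

def make_garnet(n_states, n_actions, branching, rng=None):
    """Sample a random Garnet-style instance: a row-stochastic transition
    tensor P of shape (S, A, S) with `branching` successor states per
    (s, a) pair, and a Gaussian reward table R of shape (S, A)."""
    rng = np.random.default_rng(rng)
    P = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            succ = rng.choice(n_states, size=branching, replace=False)
            P[s, a, succ] = rng.dirichlet(np.ones(branching))
    R = rng.standard_normal((n_states, n_actions))
    return P, R

def exploitability(P, R, policy, gamma=0.95, iters=2000):
    """Exploitability of `policy` under the mean field it induces:
    max over pi' of J(pi'; mu) minus J(policy; mu), with an assumed
    crowd-averse coupling r(s, a, mu) = R[s, a] - mu[s]."""
    S, A, _ = P.shape
    # Policy-averaged transition matrix of the population chain.
    P_pi = np.einsum('sa,sat->st', policy, P)
    # Stationary mean field mu: leading left eigenvector of P_pi.
    w, v = np.linalg.eig(P_pi.T)
    mu = np.abs(np.real(v[:, np.argmax(np.real(w))]))
    mu /= mu.sum()
    r_mf = R - mu[:, None]  # mean-field-coupled reward (illustrative)
    # Best-response value via value iteration, holding mu fixed.
    V = np.zeros(S)
    for _ in range(iters):
        V = (r_mf + gamma * P @ V).max(axis=1)
    # Value of the evaluated policy under the same mean field.
    r_pi = (policy * r_mf).sum(axis=1)
    V_pi = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return float(mu @ (V - V_pi))

P, R = make_garnet(8, 3, branching=3, rng=0)
uniform = np.full((8, 3), 1.0 / 3.0)
gap = exploitability(P, R, uniform)  # >= 0 up to numerical tolerance
```

A policy is a mean-field Nash equilibrium exactly when its exploitability is zero, which is why benchmarks in this setting report it as the primary metric; the small `branching` factor keeps the transition kernel sparse so instances of varying difficulty can be sampled cheaply for statistical testing.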
Problem

Research questions and friction points this paper is trying to address.

Mean Field Games
Reinforcement Learning
Benchmarking
Multi-agent Systems
Standardized Evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mean Field Games
Reinforcement Learning
Benchmark Suite
MF-Garnets
Exploitability Minimization