🤖 AI Summary
Conventional mean-field game (MFG) solvers operate instance-by-instance, rendering them inefficient for batched problems under perturbations in dynamics or cost functions.
Method: For linear-quadratic (LQ) MFGs, we propose a neural-operator-based universal solver framework. We first establish a local Lipschitz estimate for the mapping from problem specifications (dynamics and costs) to equilibrium policies. Then, we prove a universal approximation theorem for neural operators with prescribed Lipschitz regularity and derive sample complexity bounds for Lipschitz learning in infinite-dimensional spaces. Finally, we design a controlled-Lipschitz deep operator network trained via stochastic sampling of problem specifications.
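The "controlled-Lipschitz deep operator network" can be illustrated with a minimal branch-trunk sketch. This is a hypothetical toy, not the paper's architecture: it caps each branch layer's spectral norm so that the network's output is provably Lipschitz in the (discretized) rule input with a prescribed constant `L`, mirroring the paper's idea of prescribing Lipschitz regularity at the architecture level.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrally_normalize(W, cap=1.0):
    """Rescale W so its spectral norm (largest singular value) is at most `cap`."""
    s = np.linalg.norm(W, 2)
    return W if s <= cap else W * (cap / s)

class LipschitzDeepONet:
    """Toy branch-trunk operator network (hypothetical sketch) whose scalar
    output is Lipschitz in the discretized rule input with constant <= L."""

    def __init__(self, rule_dim, query_dim, width=32, L=1.0):
        cap = np.sqrt(L)  # two branch layers, each capped at sqrt(L)
        self.B1 = spectrally_normalize(rng.standard_normal((width, rule_dim)), cap)
        self.B2 = spectrally_normalize(rng.standard_normal((width, width)), cap)
        self.T1 = rng.standard_normal((width, query_dim))
        self.T2 = rng.standard_normal((width, width))

    def __call__(self, rule, query):
        # Branch: linear layers with spectral norm <= sqrt(L) and a
        # 1-Lipschitz tanh in between, so the branch map is L-Lipschitz.
        b = self.B2 @ np.tanh(self.B1 @ rule)
        # Trunk: encodes the query point (e.g. time); normalized to unit
        # length so the inner product preserves the branch's bound.
        tau = np.tanh(self.T2 @ np.tanh(self.T1 @ query))
        tau = tau / (np.linalg.norm(tau) + 1e-12)
        return float(b @ tau)
```

Training would then proceed by stochastically sampling rule vectors (discretized dynamics/cost parameters) and regressing the network's output onto the corresponding equilibrium policies; the spectral caps keep the Lipschitz constant fixed throughout.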
Results: The method enables efficient generalization to unseen LQ-MFG instances and simultaneously solves infinitely many related problems within a separable Hilbert space, bypassing sequential computation bottlenecks. It features controllable parameter complexity and strong theoretical guarantees, including explicit Lipschitz continuity and generalization bounds.
📄 Abstract
Traditional mean-field game (MFG) solvers operate on an instance-by-instance basis, which becomes infeasible when many related problems must be solved (e.g., when seeking a robust description of the solution under perturbations of the dynamics or utilities, or in settings involving continuum-parameterized agents). We overcome this by training neural operators (NOs) to learn the rules-to-equilibrium map from the problem data ("rules": dynamics and cost functionals) of linear-quadratic (LQ) MFGs defined on separable Hilbert spaces to the corresponding equilibrium strategy. Our main result is a statistical guarantee: an NO trained on a small number of randomly sampled rules reliably solves unseen LQ MFG variants, even in infinite-dimensional settings. The number of NO parameters needed remains controlled under appropriate rule sampling during training.
Our guarantee follows from three results: (i) local-Lipschitz estimates for the highly nonlinear rules-to-equilibrium map; (ii) a universal approximation theorem using NOs with a prespecified Lipschitz regularity (unlike traditional NO results where the NO's Lipschitz constant can diverge as the approximation error vanishes); and (iii) new sample-complexity bounds for $L$-Lipschitz learners in infinite dimensions, directly applicable as the Lipschitz constants of our approximating NOs are controlled in (ii).
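Schematically, the three results combine via a standard error decomposition (the notation here is illustrative, not taken from the paper): write $\Gamma$ for the rules-to-equilibrium map, $\mathcal{N}_L$ for the class of NOs with Lipschitz constant at most $L$, and $\widehat{G}_n \in \mathcal{N}_L$ for the operator trained on $n$ sampled rules $r$ from a compact set $K$. Then

$$
\sup_{r \in K}\bigl\|\widehat{G}_n(r) - \Gamma(r)\bigr\|
\;\le\;
\underbrace{\inf_{G \in \mathcal{N}_L}\,\sup_{r \in K}\bigl\|G(r) - \Gamma(r)\bigr\|}_{\text{approximation, controlled by (i) + (ii)}}
\;+\;
\underbrace{\sup_{r \in K}\bigl\|\widehat{G}_n(r) - G^\star(r)\bigr\|}_{\text{estimation, controlled by (iii)}},
$$

where $G^\star$ is a best approximant in $\mathcal{N}_L$. Result (i) ensures $\Gamma$ is locally Lipschitz so the approximation problem is well posed, (ii) makes the first term at most $\varepsilon$ with $L$ held fixed, and (iii) bounds the number of samples $n(\varepsilon, L)$ needed to make the second term small; keeping $L$ fixed as $\varepsilon \to 0$ is what makes (iii) applicable.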