Simultaneously Solving Infinitely Many LQ Mean Field Games In Hilbert Spaces: The Power of Neural Operators

📅 2025-10-22
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Conventional mean-field game (MFG) solvers operate instance-by-instance, rendering them inefficient for batched problems under perturbations in dynamics or cost functions. Method: For linear-quadratic (LQ) MFGs, we propose a neural-operator-based universal solver framework. We first establish a local Lipschitz estimate for the mapping from problem specifications (dynamics and costs) to equilibrium policies. Then, we prove a universal approximation theorem for neural operators with prescribed Lipschitz regularity and derive sample complexity bounds for Lipschitz learning in infinite-dimensional spaces. Finally, we design a controlled-Lipschitz deep operator network trained via stochastic sampling of problem specifications. Results: The method enables efficient generalization to unseen LQ-MFG instances and simultaneously solves infinitely many related problems within a separable Hilbert space, bypassing sequential computation bottlenecks. It features controllable parameter complexity and strong theoretical guarantees, including explicit Lipschitz continuity and generalization bounds.
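The "controlled-Lipschitz" idea in the summary can be illustrated with a minimal NumPy toy (all names here are hypothetical; the paper's actual model is a deep operator network, and per-layer spectral rescaling is just one standard way to cap a network's Lipschitz constant):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_normalize(W, cap):
    # Rescale W so its spectral norm (largest singular value) is at most `cap`.
    s = np.linalg.norm(W, 2)
    return W if s <= cap else W * (cap / s)

class LipschitzMLP:
    """Two-layer ReLU network with per-layer spectral-norm caps.

    ReLU is 1-Lipschitz componentwise, so the end-to-end Lipschitz constant
    is bounded by the product of the layer caps: sqrt(L) * sqrt(L) = L.
    """
    def __init__(self, dim_in, dim_hidden, dim_out, lip=2.0):
        cap = np.sqrt(lip)
        self.W1 = spectral_normalize(rng.standard_normal((dim_hidden, dim_in)), cap)
        self.W2 = spectral_normalize(rng.standard_normal((dim_out, dim_hidden)), cap)
        self.lip = lip

    def __call__(self, x):
        return self.W2 @ np.maximum(self.W1 @ x, 0.0)

# Toy "rules-to-policy" map: the input is a discretized problem specification
# (dynamics/cost coefficients), the output a discretized equilibrium policy.
net = LipschitzMLP(dim_in=16, dim_hidden=64, dim_out=16, lip=2.0)

# Empirical check: outputs vary no faster than lip * (input distance).
x, y = rng.standard_normal(16), rng.standard_normal(16)
ratio = np.linalg.norm(net(x) - net(y)) / np.linalg.norm(x - y)
assert ratio <= net.lip + 1e-9
```

The point of the construction is the one exploited in the paper's analysis: because the network's Lipschitz constant is fixed in advance rather than allowed to grow during approximation, sample-complexity bounds for L-Lipschitz learners apply directly.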

๐Ÿ“ Abstract
Traditional mean-field game (MFG) solvers operate on an instance-by-instance basis, which becomes infeasible when many related problems must be solved (e.g., when seeking a robust description of the solution under perturbations of the dynamics or utilities, or in settings involving continuum-parameterized agents). We overcome this by training neural operators (NOs) to learn the rules-to-equilibrium map from the problem data ("rules": dynamics and cost functionals) of LQ MFGs defined on separable Hilbert spaces to the corresponding equilibrium strategy. Our main result is a statistical guarantee: an NO trained on a small number of randomly sampled rules reliably solves unseen LQ MFG variants, even in infinite-dimensional settings. The number of NO parameters needed remains controlled under appropriate rule sampling during training. Our guarantee follows from three results: (i) local-Lipschitz estimates for the highly nonlinear rules-to-equilibrium map; (ii) a universal approximation theorem using NOs with a prespecified Lipschitz regularity (unlike traditional NO results, where the NO's Lipschitz constant can diverge as the approximation error vanishes); and (iii) new sample-complexity bounds for $L$-Lipschitz learners in infinite dimensions, directly applicable because the Lipschitz constants of our approximating NOs are controlled in (ii).
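To make the "rules-to-equilibrium map" concrete, one common LQ-MFG parameterization (illustrative only; the paper's exact Hilbert-space setup may differ) takes state dynamics and cost

```latex
dX_t = \bigl(A X_t + B u_t + \bar{A}\,\mathbb{E}[X_t]\bigr)\,dt + \sigma\,dW_t,
\qquad
J(u) = \mathbb{E}\int_0^T \Bigl( \langle Q X_t, X_t\rangle
 + \langle \bar{Q}\,\mathbb{E}[X_t], \mathbb{E}[X_t]\rangle
 + \langle R u_t, u_t\rangle \Bigr)\,dt,
```

where the operators $(A,\bar A,B,Q,\bar Q,R)$ are the "rules". In such models the equilibrium strategy is linear feedback obtained from a Riccati equation coupled with a mean-field fixed point, so the rules-to-equilibrium map sends $(A,\bar A,B,Q,\bar Q,R)$ to a feedback law of the form $u_t^* = -R^{-1}B^*\bigl(P_t X_t + r_t\bigr)$; it is this highly nonlinear map that the NO learns.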
Problem

Research questions and friction points this paper is trying to address.

Solving infinitely many LQ mean field games in Hilbert spaces
Learning rules-to-equilibrium map using neural operators
Providing statistical guarantees for solving unseen game variants
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural operators learn rules-to-equilibrium map for LQ MFGs
Statistical guarantees for solving unseen infinite-dimensional variants
Controlled parameters via Lipschitz-regularized neural operator approximation