Gaussian Process Policy Iteration with Additive Schwarz Acceleration for Forward and Inverse HJB and Mean Field Game Problems

📅 2025-05-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the forward solution and inverse inference of Hamilton–Jacobi–Bellman (HJB) equations and mean-field games (MFGs). We propose the first unified method embedding Gaussian processes (GPs) into a policy iteration framework. Our approach constructs an explicit closed-form policy evaluation, circumventing traditional numerical optimization; it further introduces an additive Schwarz domain decomposition preconditioning technique to accelerate alternating optimization, significantly improving convergence speed. This is the first method enabling GP-driven analytical policy evaluation and supporting joint forward/inverse solving of HJB and MFG problems. Extensive validation on multiple benchmark problems demonstrates that the Schwarz acceleration reduces iteration counts by approximately 40%, while simultaneously enhancing computational robustness and efficiency.

Technology Category

Application Category

📝 Abstract
We propose a Gaussian Process (GP)-based policy iteration framework for addressing both forward and inverse problems in Hamilton--Jacobi--Bellman (HJB) equations and mean field games (MFGs). Policy iteration is formulated as an alternating procedure between solving the value function under a fixed control policy and updating the policy based on the resulting value function. By exploiting the linear structure of GPs for function approximation, each policy evaluation step admits an explicit closed-form solution, eliminating the need for numerical optimization. To improve convergence, we incorporate the additive Schwarz acceleration as a preconditioning step following each policy update. Numerical experiments demonstrate the effectiveness of Schwarz acceleration in improving computational efficiency.
Problem

Research questions and friction points this paper is trying to address.

Solving forward and inverse HJB and MFG problems
Using GP-based policy iteration for efficiency
Incorporating Schwarz acceleration to enhance convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

GP-based policy iteration for HJB and MFGs
Closed-form solution via linear GP approximation
Additive Schwarz acceleration enhances convergence
🔎 Similar Papers
No similar papers found.
Xianjin Yang
Xianjin Yang
California Institute of Technology
Partial Differential EquationsMean Field GamesOptimizationGaussian Processes
J
Jingguo Zhang
Department of Mathematics and Risk Management Institute, National University of Singapore, Singapore.