Solving nonconvex Hamilton--Jacobi--Isaacs equations with PINN-based policy iteration

📅 2025-07-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the computational challenge of solving high-dimensional nonconvex Hamilton–Jacobi–Isaacs (HJI) partial differential equations arising in stochastic differential games and robust control. We propose a mesh-free numerical method that integrates physics-informed neural networks (PINNs) with policy iteration, eliminating the conventional convexity assumption on the Hamiltonian. Leveraging dynamic programming principles, the algorithm alternates between pointwise min–max optimization to update feedback policies and solving linearized PDEs via PINNs, with gradients computed efficiently through automatic differentiation. We establish theoretical local uniform convergence of the iterative sequence to the viscosity solution. Numerical experiments across 2D–10D benchmark problems demonstrate the method’s efficacy: relative $L^2$ errors remain below $10^{-2}$, and accuracy and scalability substantially outperform standard PINN solvers.

Technology Category

Application Category

📝 Abstract
We propose a mesh-free policy iteration framework that combines classical dynamic programming with physics-informed neural networks (PINNs) to solve high-dimensional, nonconvex Hamilton--Jacobi--Isaacs (HJI) equations arising in stochastic differential games and robust control. The method alternates between solving linear second-order PDEs under fixed feedback policies and updating the controls via pointwise minimax optimization using automatic differentiation. Under standard Lipschitz and uniform ellipticity assumptions, we prove that the value function iterates converge locally uniformly to the unique viscosity solution of the HJI equation. The analysis establishes equi-Lipschitz regularity of the iterates, enabling provable stability and convergence without requiring convexity of the Hamiltonian. Numerical experiments demonstrate the accuracy and scalability of the method. In a two-dimensional stochastic path-planning game with a moving obstacle, our method matches finite-difference benchmarks with relative $L^2$-errors below %10^{-2}%. In five- and ten-dimensional publisher-subscriber differential games with anisotropic noise, the proposed approach consistently outperforms direct PINN solvers, yielding smoother value functions and lower residuals. Our results suggest that integrating PINNs with policy iteration is a practical and theoretically grounded method for solving high-dimensional, nonconvex HJI equations, with potential applications in robotics, finance, and multi-agent reinforcement learning.
Problem

Research questions and friction points this paper is trying to address.

Solving high-dimensional nonconvex HJI equations
Combining PINNs with policy iteration
Proving convergence without Hamiltonian convexity
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mesh-free policy iteration with PINNs
Alternates PDE solving and minimax optimization
Proves convergence for nonconvex HJI equations
🔎 Similar Papers
No similar papers found.