🤖 AI Summary
Physics-informed neural networks (PINNs) suffer from slow training and poor convergence when solving boundary value problems for partial differential equations (PDEs), largely due to spectral bias. Method: We propose a framework that integrates overlapping domain decomposition with Gauss-Newton optimization. Localized subdomain networks alleviate the bottleneck in learning high-frequency solution components, while a Gauss-Newton iteration exploits the block-sparse structure of the residual Jacobian, substantially reducing the cost of the Hessian approximation and the linear system solves. Results: Experiments on canonical PDE benchmarks show 2–5× faster convergence and improved accuracy and numerical stability over first-order optimizers such as Adam. Contribution: This work is the first to combine overlapping domain decomposition with second-order optimization for PINNs, yielding a training paradigm that is both theoretically grounded and computationally efficient.
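The cost saving from block sparsity can be illustrated with a toy example. The sketch below is not the paper's implementation: it assumes (for simplicity) one parameter block per subdomain network and no overlap coupling, so the Gauss-Newton normal matrix `J^T J` becomes block diagonal and the update splits into independent small solves; the sizes and random data are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy block-sparse Gauss-Newton system: one small network per subdomain and
# (assumed here) no coupling across blocks, so J^T J is block diagonal and
# the update d solving (J^T J) d = -J^T r splits into independent solves.
n_sub, p, m = 4, 5, 12               # subdomains, params per block, residuals
J_blocks = [rng.standard_normal((m, p)) for _ in range(n_sub)]
r_parts = [rng.standard_normal(m) for _ in range(n_sub)]

# Block-wise solve: n_sub systems of size p instead of one of size n_sub*p.
d_blocks = np.concatenate([
    np.linalg.solve(J.T @ J + 1e-8 * np.eye(p), -J.T @ r)
    for J, r in zip(J_blocks, r_parts)
])

# Dense reference: assemble the full Jacobian and solve the global system.
J_full = np.zeros((m * n_sub, p * n_sub))
for k, J in enumerate(J_blocks):
    J_full[m * k:m * (k + 1), p * k:p * (k + 1)] = J
r_full = np.concatenate(r_parts)
d_full = np.linalg.solve(J_full.T @ J_full + 1e-8 * np.eye(p * n_sub),
                         -J_full.T @ r_full)

# The block-wise update matches the dense solve, at a fraction of the cost.
print(np.allclose(d_blocks, d_full))
```

In the actual method the overlap regions couple neighboring blocks, so the system is block sparse rather than strictly block diagonal, but the same structure keeps the factorization cost far below that of a dense system.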
📝 Abstract
Approximating the solutions of boundary value problems governed by partial differential equations with neural networks is challenging, largely due to the difficult training process. This difficulty can be partly explained by the spectral bias, that is, the slower convergence of high-frequency components, and can be mitigated by localizing neural networks via (overlapping) domain decomposition. We combine this localization with the Gauss-Newton method as the optimizer to obtain faster convergence than gradient-based schemes such as Adam; this comes at the cost of solving an ill-conditioned linear system in each iteration. Domain decomposition induces a block-sparse structure in the otherwise dense Gauss-Newton system, reducing the computational cost per iteration. Our numerical results indicate that combining localization and Gauss-Newton optimization is promising for neural network-based solvers for partial differential equations.
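One common way to localize networks via overlapping domain decomposition is a partition of unity: smooth weight functions that sum to one blend the subdomain networks into a single global ansatz. The sketch below shows this blending on [0, 1] with two overlapping subdomains; the specific interval split, smoothstep weight, and function names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative partition of unity on [0, 1] for two overlapping subdomains
# [0, 0.6] and [0.4, 1]: weights w1 + w2 = 1 everywhere, so a global ansatz
# u(x) = w1(x) * u1(x) + w2(x) * u2(x) smoothly blends two local networks.
def smoothstep(t):
    t = np.clip(t, 0.0, 1.0)
    return 3.0 * t**2 - 2.0 * t**3   # C^1 ramp from 0 to 1

def weights(x, a=0.4, b=0.6):        # [a, b] is the overlap region
    w2 = smoothstep((x - a) / (b - a))
    return 1.0 - w2, w2

x = np.linspace(0.0, 1.0, 101)
w1, w2 = weights(x)
print(np.allclose(w1 + w2, 1.0))     # weights form a partition of unity
```

Each local network only needs to resolve the solution on its own subdomain, which is what mitigates the spectral bias: high-frequency components of the global solution appear as lower-frequency content relative to the smaller subdomain scale.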