Domain decomposition architectures and Gauss-Newton training for physics-informed neural networks

📅 2025-10-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Physics-informed neural networks (PINNs) suffer from slow training and poor convergence when solving boundary-value problems of partial differential equations (PDEs), primarily due to spectral bias. Method: We propose a novel framework integrating overlapping domain decomposition with Gauss–Newton optimization. It employs localized subdomain neural networks to alleviate bottlenecks in learning high-frequency solution components and designs a Gauss–Newton iterator exploiting the block-sparse structure of the residual Jacobian matrix, substantially reducing computational cost for Hessian approximation and linear system solves. Results: Experiments on canonical PDE benchmarks demonstrate 2–5× faster convergence and improved accuracy and numerical stability over first-order optimizers such as Adam. Contribution: This work is the first to synergistically combine overlapping domain decomposition with second-order optimization for PINNs, yielding a training paradigm that is both theoretically well-founded and computationally efficient.

Technology Category

Application Category

📝 Abstract
Approximating the solutions of boundary value problems governed by partial differential equations with neural networks is challenging, largely due to the difficult training process. This difficulty can be partly explained by the spectral bias, that is, the slower convergence of high-frequency components, and can be mitigated by localizing neural networks via (overlapping) domain decomposition. We combine this localization with the Gauss-Newton method as the optimizer to obtain faster convergence than gradient-based schemes such as Adam; this comes at the cost of solving an ill-conditioned linear system in each iteration. Domain decomposition induces a block-sparse structure in the otherwise dense Gauss-Newton system, reducing the computational cost per iteration. Our numerical results indicate that combining localization and Gauss-Newton optimization is promising for neural network-based solvers for partial differential equations.
Problem

Research questions and friction points this paper is trying to address.

Addressing slow convergence in physics-informed neural network training
Mitigating spectral bias through localized domain decomposition methods
Improving PDE solver efficiency with Gauss-Newton optimization techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Domain decomposition architectures localize neural networks
Gauss-Newton method replaces gradient-based optimization
Block-sparse structure reduces computational cost per iteration
🔎 Similar Papers
No similar papers found.