AI Summary
To address training instability in Physics-Informed Neural Networks (PINNs) arising from complex, coupled loss functions, this work formulates PINN training as a nonconvex-strongly-concave minimax (saddle-point) optimization problem, the first such theoretical characterization. Methodologically, we integrate nonconvex optimization theory with deep learning architectures to design a physically constrained saddle-point optimizer. Theoretically, we establish rigorous convergence and stability guarantees under mild assumptions. Experimentally, our framework consistently outperforms state-of-the-art PINN training methods across diverse partial differential equation (PDE) benchmarks: loss oscillations are reduced by 42–68%, convergence accelerates by 1.3–2.1×, and solution accuracy improves by one to two orders of magnitude. This work introduces a scalable, robust, and theoretically grounded training paradigm for PINNs, advancing both practical reliability and analytical understanding of physics-informed learning.
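To make the reformulation concrete, the following is a minimal sketch of what a nonconvex-strongly-concave saddle-point objective for PINN training can look like; the specific residual weighting by dual variables and the regularization parameter μ are illustrative assumptions, not necessarily the exact objective used in the paper.

```latex
% Illustrative saddle-point reformulation of PINN training (a sketch, not
% necessarily the paper's exact objective): the network weights \theta are
% minimized while dual variables \lambda weighting the PDE residuals
% r_i(\theta) at N_r collocation points are maximized; the quadratic term
% with \mu > 0 makes the inner maximization strongly concave.
\begin{equation}
\min_{\theta} \; \max_{\lambda \in \mathbb{R}^{N_r}} \;
\mathcal{L}(\theta, \lambda)
= \mathcal{L}_{\mathrm{data}}(\theta)
+ \sum_{i=1}^{N_r} \lambda_i \, r_i(\theta)^2
- \frac{\mu}{2} \lVert \lambda \rVert^2
\end{equation}
```

The outer problem remains nonconvex because of the neural network parameters, while the inner problem is strongly concave in the dual variables, which is the structure the convergence analysis relies on.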
Abstract
Physics-informed neural networks (PINNs) have gained prominence in recent years and are now used effectively across a range of applications. However, their training remains unstable due to the complex landscape of the loss function. To address this issue, we reformulate PINN training as a nonconvex-strongly-concave saddle-point problem. After establishing the theoretical foundation for this approach, we conduct an extensive experimental study evaluating its effectiveness across various tasks and architectures. Our results demonstrate that the proposed method outperforms current state-of-the-art techniques.
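As a rough illustration of how such a saddle-point reformulation can be trained in practice, the sketch below runs simultaneous gradient descent-ascent (descent on the network weights, ascent on per-point dual variables) for a toy 1D Poisson problem in PyTorch. The problem setup, the multipliers `lam`, the parameter `mu`, and the step sizes are assumptions for illustration only, not the paper's exact algorithm.

```python
# Minimal sketch: PINN training as a saddle-point problem solved by
# gradient descent-ascent (GDA). Illustrative assumptions: toy 1D Poisson
# equation u'' = -pi^2 sin(pi x) with u(0) = u(1) = 0, per-point dual
# variables `lam`, and hand-picked learning rates.
import torch

torch.manual_seed(0)

# Small fully connected network u_theta(x) approximating the PDE solution.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

# Collocation points in (0, 1) and boundary points.
x = torch.linspace(0.0, 1.0, 128).reshape(-1, 1).requires_grad_(True)
xb = torch.tensor([[0.0], [1.0]])
lam = torch.zeros(x.shape[0], requires_grad=True)  # dual variables (ascent)
mu = 1e-3                                          # strong-concavity parameter

opt_theta = torch.optim.Adam(net.parameters(), lr=1e-3)    # descent on theta
opt_lam = torch.optim.Adam([lam], lr=1e-2, maximize=True)  # ascent on lambda

for step in range(5000):
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = d2u + (torch.pi ** 2) * torch.sin(torch.pi * x)  # PDE residual
    bc = net(xb).pow(2).mean()                                  # boundary loss

    # L(theta, lambda): nonconvex in theta, strongly concave in lambda.
    loss = (bc
            + (lam * residual.squeeze() ** 2).mean()
            - 0.5 * mu * lam.pow(2).mean())

    opt_theta.zero_grad()
    opt_lam.zero_grad()
    loss.backward()
    opt_theta.step()  # gradient descent on the network weights
    opt_lam.step()    # gradient ascent on the multipliers (maximize=True)

    if step % 1000 == 0:
        print(f"step {step}: loss {loss.item():.3e}")
```

The dual ascent automatically increases the weight on collocation points whose residuals remain large, which is one intuitive reading of why a saddle-point view can stabilize PINN training.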