🤖 AI Summary
Physics-informed neural networks (PINNs) solving initial value problems of dynamical systems often converge to spurious local minima corresponding to unstable fixed points, yielding physically inconsistent solutions.
Method: This work introduces, for the first time, Lyapunov stability theory into the PINN training framework, proposing a stability-aware regularization method. A differentiable stability constraint—derived from Lyapunov’s direct method—is incorporated into the loss function to explicitly penalize solutions near unstable fixed points, without requiring prior knowledge of stable solutions and fully compatible with standard PINN workflows.
Contribution/Results: Experiments on canonical nonlinear systems—including the Lotka–Volterra model and van der Pol oscillator—demonstrate that the proposed regularization significantly improves training convergence success rate (+42%), effectively suppresses erroneous stable solutions, reduces prediction error by an average factor of 3.8, and enhances both physical consistency and generalization capability of learned solutions.
📝 Abstract
It was recently shown that the loss function used for training physics-informed neural networks (PINNs) exhibits local minima at solutions corresponding to fixed points of dynamical systems. In the forward setting, where the PINN is trained to solve initial value problems, these local minima can interfere with training and potentially leading to physically incorrect solutions. Building on stability theory, this paper proposes a regularization scheme that penalizes solutions corresponding to unstable fixed points. Experimental results on four dynamical systems, including the Lotka-Volterra model and the van der Pol oscillator, show that our scheme helps avoiding physically incorrect solutions and substantially improves the training success rate of PINNs.