🤖 AI Summary
To address the low training efficiency and poor accuracy of Physics-Informed Neural Networks (PINNs), this paper proposes the first natural gradient optimization framework tailored to the model manifold. Methodologically, it establishes— for the first time—a rigorous mathematical connection between PINNs and Green’s function theory, yielding a PDE-driven geometric interpretation of the natural gradient. It further introduces a scalable natural gradient algorithm whose computational complexity is reduced to min(P²S, S²P), where P denotes the number of parameters and S the number of collocation points. By integrating differential-geometric modeling with PINN reconstruction, the framework enables efficient data assimilation and high-fidelity PDE solution. Experiments demonstrate substantially accelerated convergence and improved solution accuracy, while preserving theoretical rigor and computational tractability.
📝 Abstract
In the recent years, Physics Informed Neural Networks (PINNs) have received strong interest as a method to solve PDE driven systems, in particular for data assimilation purpose. This method is still in its infancy, with many shortcomings and failures that remain not properly understood. In this paper we propose a natural gradient approach to PINNs which contributes to speed-up and improve the accuracy of the training. Based on an in depth analysis of the differential geometric structures of the problem, we come up with two distinct contributions: (i) a new natural gradient algorithm that scales as $min(P^2S, S^2P)$, where $P$ is the number of parameters, and $S$ the batch size; (ii) a mathematically principled reformulation of the PINNs problem that allows the extension of natural gradient to it, with proved connections to Green's function theory.