🤖 AI Summary
To address the high computational cost and limited practicality of Energy-based Natural Gradient Descent (ENGD) in Physics-Informed Neural Networks (PINNs), this work proposes a multi-level acceleration and stabilization framework. Specifically, it integrates the Woodbury matrix identity, the momentum-based Subsampled Projected-Increment Natural Gradient Descent algorithm from variational Monte Carlo, and randomized low-rank computation for large batches into the ENGD paradigm. The resulting method matches the $L^2$ error of standard ENGD up to 75× faster, improves early-stage convergence for low-dimensional problems, and alleviates computational bottlenecks in large-scale PINN optimization. The core contribution is the principled fusion of three complementary acceleration techniques—fast matrix inversion, stochastic natural gradient approximation, and randomized low-rank computation—jointly enhancing accuracy, training efficiency, and scalability.
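The Woodbury-based speedup hinges on a push-through identity: when the batch size $N$ is much smaller than the parameter count $P$, the damped Gramian solve can be moved from $P\times P$ parameter space into $N\times N$ batch space. A minimal NumPy sketch (the dimensions, random Jacobian, and damping value are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 50, 2000                    # batch size << parameter count
J = rng.standard_normal((N, P))    # toy stand-in for the residual Jacobian
r = rng.standard_normal(N)         # PDE residuals at the batch points
lam = 1e-3                         # damping / regularization

# Direct step: invert the P x P damped Gramian, O(P^3)
step_direct = np.linalg.solve(lam * np.eye(P) + J.T @ J, J.T @ r)

# Woodbury / push-through: (lam*I_P + J^T J)^{-1} J^T = J^T (lam*I_N + J J^T)^{-1},
# so only an N x N system is solved, O(N^3 + N^2 P)
step_woodbury = J.T @ np.linalg.solve(lam * np.eye(N) + J @ J.T, r)

assert np.allclose(step_direct, step_woodbury)
```

Both expressions yield the same natural-gradient update direction; the second avoids ever forming the $P\times P$ matrix.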
📝 Abstract
Natural gradient methods significantly accelerate the training of Physics-Informed Neural Networks (PINNs), but are often prohibitively costly. We introduce a suite of techniques to improve the accuracy and efficiency of energy natural gradient descent (ENGD) for PINNs. First, we leverage the Woodbury formula to dramatically reduce the computational complexity of ENGD. Second, we adapt the Subsampled Projected-Increment Natural Gradient Descent algorithm from the variational Monte Carlo literature to accelerate convergence. Third, we explore the use of randomized algorithms to further reduce the computational cost for large batch sizes. We find that randomization accelerates progress in the early stages of training for low-dimensional problems, and we identify key barriers to attaining acceleration in other scenarios. Our numerical experiments demonstrate that our methods outperform previous approaches, achieving the same $L^2$ error as the original ENGD up to $75\times$ faster.
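For the third ingredient, a standard way to cut the cost of large-batch linear algebra is a Halko-style randomized range finder, which builds a low-rank factorization from a few matrix-vector products. A self-contained sketch on a synthetic matrix with fast spectral decay (the matrix, rank, and oversampling are illustrative assumptions; this shows the generic technique, not the paper's exact algorithm):

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, rng=None):
    """Randomized range finder: returns Q, B with A ~= Q @ B, rank k + oversample."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[1]
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal basis for range(A @ Omega)
    return Q, Q.T @ A

rng = np.random.default_rng(1)
m, n, k = 800, 600, 20
# synthetic matrix with rapidly decaying singular values
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 2.0 ** -np.arange(n)
A = (U[:, :n] * s) @ V.T

Q, B = randomized_low_rank(A, k)
rel_err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
assert rel_err < 1e-4
```

When the spectrum decays quickly, the rank-$(k+p)$ factorization is accurate while requiring only $O(mnk)$ work instead of a full decomposition.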