Improving Energy Natural Gradient Descent through Woodbury, Momentum, and Randomization

📅 2025-05-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost that limits the practicality of Energy Natural Gradient Descent (ENGD) for Physics-Informed Neural Networks (PINNs), this work proposes a multi-level acceleration and stabilization framework. Specifically, it integrates the Woodbury matrix identity, momentum-based Subsampled Projected-Increment Natural Gradient Descent, and randomized low-rank batch processing into the ENGD paradigm. The resulting method reaches the same $L^2$ error as standard ENGD up to 75× faster, improves early-stage convergence for low-dimensional problems, and alleviates computational bottlenecks in large-scale PINN optimization. The core contribution is the principled fusion of three complementary acceleration techniques (fast matrix inversion, stochastic natural-gradient approximation, and randomized low-rank computation), which jointly improve accuracy, training efficiency, and scalability.
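The momentum-based subsampled natural-gradient idea mentioned above can be sketched on a toy linear least-squares problem. This is a generic illustration only: all sizes and hyperparameters are made up, the toy residual stands in for a PINN residual, and plain heavy-ball momentum is used rather than the paper's exact projected-increment update rule.

```python
import numpy as np

rng = np.random.default_rng(2)
rows, p, batch, lam, lr, mu = 400, 50, 40, 1e-3, 0.5, 0.5  # illustrative sizes/hyperparameters

# Toy linear least-squares problem standing in for the PINN residual: r(theta) = A @ theta - b
A = rng.standard_normal((rows, p)) / np.sqrt(p)
b = A @ rng.standard_normal(p)          # consistent system: a zero-residual optimum exists
theta = np.zeros(p)
direction = np.zeros(p)                 # momentum buffer carried across batches

for _ in range(300):
    idx = rng.choice(rows, size=batch, replace=False)  # subsampled batch of residuals
    J, r = A[idx], A[idx] @ theta - b[idx]             # batch Jacobian and batch residual
    # Damped natural-gradient step, solved through the small batch-size kernel system
    delta = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(batch), r)
    direction = mu * direction + delta                 # heavy-ball momentum on the direction
    theta -= lr * direction

print(np.linalg.norm(A @ theta - b))    # full residual, driven toward zero by the iteration
```

The momentum buffer reuses curvature information from earlier batches, which is what lets the subsampled iteration make progress comparable to a full-batch natural-gradient step.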

📝 Abstract
Natural gradient methods significantly accelerate the training of Physics-Informed Neural Networks (PINNs), but are often prohibitively costly. We introduce a suite of techniques to improve the accuracy and efficiency of energy natural gradient descent (ENGD) for PINNs. First, we leverage the Woodbury formula to dramatically reduce the computational complexity of ENGD. Second, we adapt the Subsampled Projected-Increment Natural Gradient Descent algorithm from the variational Monte Carlo literature to accelerate the convergence. Third, we explore the use of randomized algorithms to further reduce the computational cost in the case of large batch sizes. We find that randomization accelerates progress in the early stages of training for low-dimensional problems, and we identify key barriers to attaining acceleration in other scenarios. Our numerical experiments demonstrate that our methods outperform previous approaches, achieving the same $L^2$ error as the original ENGD up to $75\times$ faster.
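The Woodbury reduction the abstract refers to can be illustrated in a generic damped Gauss-Newton setting (a sketch only; the damping $\lambda$ and the plain Gramian $J^\top J$ here stand in for the paper's energy Gramian). For a batch Jacobian $J \in \mathbb{R}^{n \times p}$ with batch size $n$ far below the parameter count $p$, the push-through identity $(\lambda I_p + J^\top J)^{-1} J^\top = J^\top (\lambda I_n + J J^\top)^{-1}$ replaces a $p \times p$ solve with an $n \times n$ one:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 2000, 1e-3          # illustrative: batch size n far below parameter count p
J = rng.standard_normal((n, p))     # Jacobian of the batch residual w.r.t. the parameters
r = rng.standard_normal(n)          # batch residual

# Naive damped Gauss-Newton/ENGD step: solve the p x p Gram system, O(p^3)
delta_direct = np.linalg.solve(J.T @ J + lam * np.eye(p), J.T @ r)

# Woodbury (push-through) form: solve an n x n kernel system instead, O(n^2 p + n^3)
delta_woodbury = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(n), r)

print(np.allclose(delta_direct, delta_woodbury))  # True: the identity is exact
```

Since modern networks typically have far more parameters than there are collocation points in a batch, working in the $n \times n$ kernel space is where the bulk of the reported speedup plausibly comes from.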
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost of energy natural gradient descent for PINNs
Accelerating convergence using Woodbury formula and randomization techniques
Improving training efficiency while maintaining accuracy in low-dimensional problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Woodbury formula reduces ENGD computational complexity
Subsampled algorithm accelerates convergence for PINNs
Randomized algorithms cut costs for large batches
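For the large-batch regime, one standard randomized technique is the Halko-Martinsson-Tropp randomized range finder, which sketches a low-rank factorization of the kernel matrix $J J^\top$ before inverting it. The following is a minimal sketch under that assumption (the paper's exact randomized algorithm is not specified here, and the sizes, damping, and the exactly low-rank Jacobian are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, lam = 500, 2000, 60, 1e-2      # illustrative: large batch n, sketch rank k

# A Jacobian with rapidly decaying spectrum (the regime where sketching pays off);
# here it is exactly rank 40 so the rank-k sketch captures it fully
J = rng.standard_normal((n, 40)) @ rng.standard_normal((40, p)) / np.sqrt(p)
r = rng.standard_normal(n)

# Exact kernel solve for reference: x = (lam I + J J^T)^{-1} r, O(n^3)
x_exact = np.linalg.solve(lam * np.eye(n) + J @ J.T, r)

# Randomized range finder: J is approximated by Q @ (Q^T J) with k orthonormal columns
Omega = rng.standard_normal((p, k))     # Gaussian test matrix
Q, _ = np.linalg.qr(J @ Omega)          # orthonormal basis for the sketched range of J
B = Q.T @ J                             # k x p factor, so J J^T ~ Q (B B^T) Q^T

# Invert (lam I + Q B B^T Q^T): scale by 1/lam off range(Q), solve k x k system on it
a = Q.T @ r
x_sketch = (r - Q @ a) / lam + Q @ np.linalg.solve(lam * np.eye(k) + B @ B.T, a)

rel_err = np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact)
print(rel_err)                          # small here because J is exactly low rank
```

The sketch costs $O(npk + k^2 p)$ instead of $O(n^3)$, which pays off when the kernel spectrum decays quickly; when it does not (as in higher-dimensional problems), the rank needed for accuracy grows, which is consistent with the barriers to acceleration the abstract mentions.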