Gradient Regularized Natural Gradients

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the poor stability, slow convergence, and weak generalization of natural gradient methods in large-scale deep learning by proposing Gradient-Regularized Natural Gradient (GRNG), which for the first time integrates explicit gradient regularization into the natural gradient update. By leveraging a structured Fisher approximation and regularized Kalman filtering, GRNG establishes an efficient and scalable second-order optimization framework that avoids explicit inversion of the Fisher information matrix while providing theoretical convergence guarantees. Experimental results demonstrate that GRNG consistently outperforms mainstream optimizers—including SGD, AdamW, K-FAC, and Sophia—in both optimization speed and generalization performance across vision and language tasks.

📝 Abstract
Gradient regularization (GR) has been shown to improve the generalizability of trained models. While Natural Gradient Descent has been shown to accelerate optimization in the initial phase of training, little attention has been paid to how the training dynamics of second-order optimizers can benefit from GR. In this work, we propose Gradient-Regularized Natural Gradients (GRNG), a family of scalable second-order optimizers that integrate explicit gradient regularization with natural gradient updates. Our framework provides two complementary algorithms: a frequentist variant that avoids explicit inversion of the Fisher Information Matrix (FIM) via structured approximations, and a Bayesian variant based on a Regularized-Kalman formulation that eliminates the need for FIM inversion entirely. We establish convergence guarantees for GRNG, showing that gradient regularization improves stability and enables convergence to global minima. Empirically, we demonstrate that GRNG consistently enhances both optimization speed and generalization compared to first-order methods (SGD, AdamW) and second-order baselines (K-FAC, Sophia), with strong results on vision and language benchmarks. Our findings highlight gradient regularization as a principled and practical tool to unlock the robustness of natural gradient methods for large-scale deep learning.
Problem

Research questions and friction points this paper is trying to address.

Gradient Regularization
Natural Gradient Descent
Second-order Optimization
Generalization
Training Stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient Regularization
Natural Gradient Descent
Fisher Information Matrix
Second-order Optimization
Regularized Kalman Filter