Adam or Gauss-Newton? A Comparative Study In Terms of Basis Alignment and SGD Noise

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the equivalence and performance differences between Adam and diagonal Gauss–Newton (GN) preconditioners in deep learning optimization. Methodologically, it combines theoretical analysis—under quadratic objectives, linear/logistic regression, and Gaussian data assumptions—with empirical validation. The contributions are threefold: (1) Under full-batch settings, Adam provably outperforms both the inverse GN preconditioner (GN⁻¹) and its square-root variant (GN⁻¹/²); (2) In stochastic mini-batch regimes, Adam dynamically approximates GN⁻¹/², with its behavior governed jointly by gradient noise characteristics and the choice of preconditioning basis; (3) It provides the first unified explanation of their divergent behaviors on convex and non-convex problems, grounded in noise modeling and basis alignment. These findings offer principled theoretical insights and practical guidance for optimizer design and selection in modern deep learning.
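The full-batch comparison above contrasts three diagonal update rules: Adam, GN⁻¹, and GN⁻¹/². A minimal sketch on a toy diagonal quadratic illustrates the three rules side by side (the quadratic, hyperparameters, and step counts below are illustrative assumptions, not the paper's experimental setup; Adam is shown without momentum, i.e. β₁ = 0):

```python
import numpy as np

# Toy diagonal quadratic f(w) = 0.5 * sum_j H_j * w_j^2, so the gradient is g = H * w.
# For this objective the diagonal Gauss-Newton matrix coincides with H.
H = np.array([100.0, 10.0, 1.0])   # illustrative curvatures
lr, eps, beta2 = 0.1, 1e-8, 0.9    # illustrative hyperparameters

def gn_inv_step(w):
    # GN^{-1} preconditioning: divide the gradient by the diagonal GN entries.
    g = H * w
    return w - lr * g / (H + eps)

def gn_inv_sqrt_step(w):
    # GN^{-1/2} preconditioning: divide by the square root of the diagonal GN entries.
    g = H * w
    return w - lr * g / (np.sqrt(H) + eps)

def adam_step(w, v, t):
    # Adam with beta1 = 0 (no momentum) and bias-corrected second moment.
    g = H * w
    v = beta2 * v + (1 - beta2) * g**2
    v_hat = v / (1 - beta2**t)
    return w - lr * g / (np.sqrt(v_hat) + eps), v

w_gn = w_sqrt = w_adam = np.ones(3)
v = np.zeros(3)
for t in range(1, 201):
    w_gn = gn_inv_step(w_gn)
    w_sqrt = gn_inv_sqrt_step(w_sqrt)
    w_adam, v = adam_step(w_adam, v, t)

print(np.abs(w_gn).max(), np.abs(w_sqrt).max(), np.abs(w_adam).max())
```

The structural difference is visible in the denominators alone: GN⁻¹ rescales by curvature, GN⁻¹/² by its square root, and Adam by a running root-mean-square of the gradients themselves, which is why the choice of basis and the gradient statistics matter.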

📝 Abstract
Diagonal preconditioners are computationally feasible approximations to second-order optimizers, which have shown significant promise in accelerating training of deep learning models. Two predominant approaches are based on Adam and Gauss-Newton (GN) methods: the former leverages statistics of current gradients and is the de facto optimizer for neural networks, and the latter uses the diagonal elements of the Gauss-Newton matrix and underpins some of the recent diagonal optimizers such as Sophia. In this work, we compare these two diagonal preconditioning methods through the lens of two key factors: the choice of basis in the preconditioner, and the impact of gradient noise from mini-batching. To gain insights, we analyze these optimizers on quadratic objectives and logistic regression under all four quadrants of these two factors. We show that regardless of the basis, there exist instances where Adam outperforms both GN$^{-1}$ and GN$^{-1/2}$ in full-batch settings. Conversely, in the stochastic regime, Adam behaves similarly to GN$^{-1/2}$ for linear regression under a Gaussian data assumption. These theoretical results are supported by empirical studies on both convex and non-convex objectives.
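The stochastic-regime claim, that Adam's second-moment preconditioner tracks GN^{1/2} for linear regression with Gaussian data, can be checked numerically. The intuition: near the optimum the residual is independent noise, so the per-example squared gradient satisfies E[g_j²] ≈ σ² · diag(GN)_j, making Adam's 1/√v proportional to GN^{-1/2}. A hedged sketch (the synthetic data, dimensions, and noise level are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Synthetic linear regression with Gaussian inputs of very different scales.
rng = np.random.default_rng(0)
n, d = 20000, 5
scales = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # per-coordinate input std devs
X = rng.normal(size=(n, d)) * scales
w_star = rng.normal(size=d)
y = X @ w_star + 0.1 * rng.normal(size=n)       # noise std sigma = 0.1

# Diagonal Gauss-Newton preconditioner for least squares: diag(X^T X / n).
gn_diag = np.mean(X**2, axis=0)

# Adam's second-moment estimate at the optimum w = w_star:
# per-example gradient g_i = x_i * (x_i^T w - y_i), and v_j ~= E[g_{i,j}^2].
residuals = X @ w_star - y
per_example_grads = X * residuals[:, None]
v = np.mean(per_example_grads**2, axis=0)

# Adam rescales by 1/sqrt(v); GN^{-1/2} rescales by 1/sqrt(gn_diag).
# With the residual independent of x, v ~= sigma^2 * gn_diag, so the
# ratio below should be roughly constant (about sigma) across coordinates.
ratio = np.sqrt(v) / np.sqrt(gn_diag)
print(ratio)
```

Despite input scales spanning more than an order of magnitude, the ratio stays nearly flat, which is the sense in which batch-size-1 Adam implements GN^{-1/2} preconditioning up to a scalar.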
Problem

Research questions and friction points this paper is trying to address.

Compares Adam and Gauss-Newton diagonal preconditioners for deep learning optimization
Analyzes basis alignment and SGD noise effects on optimizer performance
Evaluates theoretical and empirical performance across convex and non-convex objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frames diagonal preconditioners as computationally tractable approximations to second-order optimizers
Directly compares Adam- and Gauss-Newton-based diagonal preconditioning within a single analysis
Isolates the effects of preconditioner basis choice and mini-batch gradient noise