Does SGD really happen in tiny subspaces?

📅 2024-05-25
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work investigates whether SGD updates effectively occur within the low-rank subspace spanned by the top eigenvectors of the loss Hessian, and whether the alignment between gradients and this subspace is what drives optimization. Using Hessian spectral analysis, gradient projection, and controlled ablation experiments, the authors systematically evaluate SGD, SAM, momentum, and adaptive optimizers across diverse models and tasks, including Edge-of-Stability regimes. Contrary to prevailing assumptions, they find that the high gradient–subspace alignment is deceptive: projecting updates onto the dominant subspace fails to reduce the loss further, while projecting the dominant subspace out, despite removing the majority of the update's norm, barely affects training. Alignment with a low-dimensional Hessian subspace is therefore not intrinsic to optimization; the gradient components orthogonal to this subspace are the ones essential for convergence. These results challenge the widely accepted “low-dimensional training” hypothesis in deep learning.
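A minimal sketch of the projection ablation the summary describes, assuming the top-k Hessian eigenvectors are available as the columns of a matrix `U`; the function name `projected_sgd_step` and the `mode` labels are our own illustrative choices, not the paper's code.

```python
import numpy as np

def projected_sgd_step(w, grad, U, lr=0.01, mode="full"):
    """One SGD step, optionally restricted to or stripped of the dominant
    subspace spanned by the columns of U (top-k Hessian eigenvectors)."""
    g_dom = U @ (U.T @ grad)      # component inside the dominant subspace
    if mode == "dominant":        # keep only the dominant-subspace component
        update = g_dom
    elif mode == "bulk":          # project the dominant subspace out
        update = grad - g_dom
    else:                         # unmodified SGD
        update = grad
    return w - lr * update
```

The paper's observation, per the summary, is that the `"dominant"` variant stops reducing the loss, while the `"bulk"` variant trains roughly as well as `"full"` despite carrying a minority of the update's norm.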

📝 Abstract
Understanding the training dynamics of deep neural networks is challenging due to their high-dimensional nature and intricate loss landscapes. Recent studies have revealed that, along the training trajectory, the gradient approximately aligns with a low-rank top eigenspace of the training loss Hessian, referred to as the dominant subspace. Given this alignment, this paper explores whether neural networks can be trained within the dominant subspace, which, if feasible, could lead to more efficient training methods. Our primary observation is that when the SGD update is projected onto the dominant subspace, the training loss does not decrease further. This suggests that the observed alignment between the gradient and the dominant subspace is spurious. Surprisingly, projecting out the dominant subspace proves to be just as effective as the original update, despite removing the majority of the original update component. We observe similar behavior across practical setups, including the large learning rate regime (also known as Edge of Stability), Sharpness-Aware Minimization, momentum, and adaptive optimizers. We discuss the main causes and implications of this spurious alignment, shedding light on the dynamics of neural network training.
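To make the alignment described in the abstract concrete, here is a hedged toy example on a quadratic loss L(w) = ½ wᵀHw, where the Hessian is explicit; the spectrum, dimensions, and variable names are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 50, 5                      # parameter dimension, dominant-subspace rank

# Hessian with k large ("dominant") eigenvalues and a low-magnitude bulk.
eigvals = np.concatenate([np.linspace(100.0, 50.0, k),
                          np.linspace(1.0, 0.1, d - k)])
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
H = Q @ np.diag(eigvals) @ Q.T

w = rng.standard_normal(d)
grad = H @ w                      # gradient of L(w) = 0.5 * w @ H @ w

# np.linalg.eigh returns eigenvalues in ascending order, so the last k
# columns are the top-k eigenvectors spanning the dominant subspace.
_, vecs = np.linalg.eigh(H)
U = vecs[:, -k:]

g_dom = U @ (U.T @ grad)          # projection onto the dominant subspace
g_bulk = grad - g_dom             # orthogonal ("bulk") component

# Fraction of the gradient norm inside the dominant subspace: close to 1,
# mirroring the alignment observed along real training trajectories.
alignment = np.linalg.norm(g_dom) / np.linalg.norm(grad)
print(f"alignment = {alignment:.3f}")
```

Because the large eigenvalues amplify their eigendirections in the gradient, `alignment` is close to 1 here; the paper's point is that on real networks this alignment is nonetheless spurious, since the small `g_bulk` component is what actually drives the loss down.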
Problem

Research questions and friction points this paper is trying to address.

Explores whether neural networks can be trained within the dominant Hessian subspace.
Tests the effectiveness of SGD updates restricted to the low-rank top eigenspace.
Investigates spurious gradient–subspace alignment during training.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explores training restricted to the dominant subspace.
Projects SGD updates onto, and out of, the dominant subspace.
Tests across varied optimization setups (Edge of Stability, SAM, momentum, adaptive optimizers).