A Unified Stability Analysis of SAM vs SGD: Role of Data Coherence and Emergence of Simplicity Bias

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the theoretical gap between data structure, optimization dynamics, and solution properties in overparameterized deep learning: specifically, why SGD and its variants (e.g., SAM) prefer flat or simple minima and thereby generalize well. Method: We propose a unified analytical framework based on linear stability theory and introduce "data coherence", a geometric measure quantifying the alignment between gradients and curvature across data points. We rigorously model and compare the dynamical stability of SGD, SGD with random perturbations, and SAM in two-layer ReLU networks. Contribution/Results: We prove that high data coherence induces more stable and structurally simpler minima, and that SAM enhances generalization by selectively stabilizing flat directions in parameter space. This is the first work to jointly model data geometry, optimization stability, and generalization bias within a single principled framework, establishing a new theoretical foundation for algorithmic simplicity bias in deep learning.
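The summary leaves the precise definition of "data coherence" to the paper itself; as a rough illustration, here is a minimal NumPy sketch of one plausible proxy, assuming coherence is read off from (a) the mean pairwise cosine similarity of per-sample gradients and (b) how much of each gradient lies along the Hessian's top curvature direction. The helper names and toy data below are hypothetical, not the paper's construction.

```python
import numpy as np

def pairwise_gradient_coherence(grads):
    """Mean pairwise cosine similarity of per-sample gradients
    (hypothetical proxy, not the paper's definition).
    grads: (n, d) array; row i is the loss gradient on sample i.
    Near 1: gradients agree across samples; near 0: nearly orthogonal."""
    G = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    C = G @ G.T                                  # all pairwise cosines
    n = len(G)
    return float((C.sum() - np.trace(C)) / (n * (n - 1)))

def curvature_alignment(grads, H):
    """Average squared projection of normalized per-sample gradients
    onto the top eigenvector of the Hessian H: how strongly gradients
    align with the dominant curvature direction."""
    _, eigvecs = np.linalg.eigh(H)               # ascending eigenvalues
    v = eigvecs[:, -1]                           # top curvature direction
    G = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    return float(np.mean((G @ v) ** 2))

# Toy check: coherent vs incoherent per-sample gradients.
rng = np.random.default_rng(0)
d, n = 50, 200
u = rng.normal(size=d); u /= np.linalg.norm(u)
coherent = (np.outer(rng.uniform(0.5, 1.5, size=n), u)
            + 0.05 * rng.normal(size=(n, d)))
incoherent = rng.normal(size=(n, d))
H = 3.0 * np.outer(u, u) + 0.1 * np.eye(d)       # sharp along u only

print(pairwise_gradient_coherence(coherent))     # close to 1
print(pairwise_gradient_coherence(incoherent))   # close to 0
print(curvature_alignment(coherent, H))          # close to 1
print(curvature_alignment(incoherent, H))        # about 1/d
```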

📝 Abstract
Understanding the dynamics of optimization in deep learning is increasingly important as models scale. While stochastic gradient descent (SGD) and its variants reliably find solutions that generalize well, the mechanisms driving this generalization remain unclear. Notably, these algorithms often prefer flatter or simpler minima, particularly in overparameterized settings. Prior work has linked flatness to generalization, and methods like Sharpness-Aware Minimization (SAM) explicitly encourage flatness, but a unified theory connecting data structure, optimization dynamics, and the nature of learned solutions is still lacking. In this work, we develop a linear stability framework that analyzes the behavior of SGD, random perturbations, and SAM, particularly in two-layer ReLU networks. Central to our analysis is a coherence measure that quantifies how gradient curvature aligns across data points, revealing why certain minima are stable and favored during training.
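For concreteness, here is a minimal NumPy sketch of the two update rules being compared. The SAM step follows the published algorithm (Foret et al., 2021: ascend a distance rho along the normalized gradient, then descend using the gradient taken at the perturbed point), while the toy quadratic loss and hyperparameters are illustrative choices, not the paper's setup.

```python
import numpy as np

def sgd_step(w, grad_fn, lr):
    """Plain gradient step: w <- w - lr * g(w)."""
    return w - lr * grad_fn(w)

def sam_step(w, grad_fn, lr, rho):
    """SAM step (Foret et al., 2021): take the gradient at the
    worst-case point within an L2 ball of radius rho around w."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent perturbation
    return w - lr * grad_fn(w + eps)             # descend from w + eps

# Toy quadratic with a sharp axis (curvature 25) and a flat axis
# (curvature 1). The rho-perturbation amplifies SAM's effective
# gradient along the sharp axis, the mechanism the paper's linear
# stability analysis makes precise.
def grad(w):
    return np.array([25.0 * w[0], 1.0 * w[1]])

w_sgd = w_sam = np.array([1.0, 1.0])
for _ in range(50):
    w_sgd = sgd_step(w_sgd, grad, lr=0.05)
    w_sam = sam_step(w_sam, grad, lr=0.05, rho=0.1)
print("SGD:", w_sgd, "SAM:", w_sam)
```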
Problem

Research questions and friction points this paper aims to address.

Analyzing the optimization dynamics of deep learning models
Understanding the generalization mechanisms of SGD and SAM
Developing a coherence measure linking data structure to solution stability
Innovation

Methods, ideas, or system contributions that make the work stand out.

A linear stability framework analyzes SGD, random perturbations, and SAM (a baseline stability condition is sketched after this list)
A coherence measure quantifies gradient-curvature alignment across data points
Two-layer ReLU networks reveal the stability mechanisms at work during training
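As a concrete baseline for the first bullet: near a minimum theta* with Hessian H, full-batch gradient descent evolves as theta_{t+1} - theta* = (I - lr * H)(theta_t - theta*), which stays bounded only if lr <= 2 / lambda_max(H); the paper's framework refines this picture for SGD noise and SAM's perturbation. The sketch below implements only this classical condition, with hypothetical helper names.

```python
import numpy as np

def top_eigenvalue(hvp, dim, iters=100, seed=0):
    """Largest Hessian eigenvalue ("sharpness") via power iteration,
    using only Hessian-vector products hvp(v)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(v)
        v = hv / (np.linalg.norm(hv) + 1e-12)
    return float(v @ hvp(v))

def max_stable_lr(hvp, dim):
    """Classical linear-stability bound for full-batch GD at a minimum:
    the linearized iteration (I - lr * H) stays bounded iff
    lr <= 2 / lambda_max(H). Sharper minima tolerate only smaller
    learning rates; this is the baseline the paper extends to SGD
    noise and SAM's perturbation."""
    return 2.0 / top_eigenvalue(hvp, dim)

# Example: a quadratic minimum with a known diagonal Hessian.
H = np.diag([25.0, 4.0, 1.0])
print(max_stable_lr(lambda v: H @ v, dim=3))  # -> 0.08 == 2 / 25
```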