Flatness-Aware Stochastic Gradient Langevin Dynamics

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep learning generalization is known to correlate with flat minima of the loss landscape, yet conventional Stochastic Gradient Langevin Dynamics (SGLD) lacks an explicit bias toward such solutions. To address this, we propose flatness-aware SGLD (fSGLD), which evaluates stochastic gradients at weights perturbed by isotropic Gaussian noise. We establish, for the first time, non-asymptotic theoretical guarantees: the invariant measure of fSGLD stays close to a stationary measure concentrated on the global minimizers of a Hessian-trace-regularized objective, yielding both convergence-rate and excess-risk bounds without additional gradient evaluations. The perturbation implicitly captures local geometric structure via randomized smoothing. Empirically, fSGLD achieves superior or competitive generalization over strong baselines (e.g., SAM) on noisy-label and large-scale vision benchmarks, while incurring only about half of SAM's training cost. Hessian-spectrum analysis further confirms that fSGLD converges to flatter minima.
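To make the update concrete, here is a minimal sketch of one fSGLD step on a toy quadratic loss, assuming the standard SGLD form with the gradient evaluated at randomly perturbed weights; the function and hyperparameter names (grad_loss, lr, sigma, beta) are illustrative placeholders, not the paper's code or notation.

```python
# Hypothetical sketch of one fSGLD step: theta <- theta - lr * grad(theta + eps) + sqrt(2*lr/beta) * xi,
# with eps ~ N(0, sigma^2 I) (random weight perturbation) and xi ~ N(0, I) (Langevin noise).
import numpy as np

rng = np.random.default_rng(0)

def grad_loss(theta: np.ndarray) -> np.ndarray:
    """Gradient of a toy quadratic loss f(theta) = 0.5 * ||theta||^2."""
    return theta

def fsgld_step(theta, lr=1e-2, sigma=1e-2, beta=1e4):
    eps = sigma * rng.standard_normal(theta.shape)   # isotropic Gaussian weight perturbation
    g = grad_loss(theta + eps)                       # stochastic gradient at the perturbed weights
    xi = rng.standard_normal(theta.shape)            # injected Langevin noise
    return theta - lr * g + np.sqrt(2.0 * lr / beta) * xi

theta = rng.standard_normal(10)
for _ in range(1000):
    theta = fsgld_step(theta)
print(np.linalg.norm(theta))  # small: the iterates settle near the global minimum at the origin
```

The only change relative to plain SGLD is where the gradient is evaluated, which is why the method needs no extra gradient evaluations per step.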

📝 Abstract
Generalization in deep learning is closely tied to the pursuit of flat minima in the loss landscape, yet classical Stochastic Gradient Langevin Dynamics (SGLD) offers no mechanism to bias its dynamics toward such low-curvature solutions. This work introduces Flatness-Aware Stochastic Gradient Langevin Dynamics (fSGLD), designed to efficiently and provably seek flat minima in high-dimensional nonconvex optimization problems. At each iteration, fSGLD uses the stochastic gradient evaluated at parameters perturbed by isotropic Gaussian noise, commonly referred to as Random Weight Perturbation (RWP), thereby optimizing a randomized-smoothing objective that implicitly captures curvature information. Leveraging these properties, we prove that the invariant measure of fSGLD stays close to a stationary measure concentrated on the global minimizers of a loss function regularized by the Hessian trace whenever the inverse temperature and the scale of random weight perturbation are properly coupled. This result provides a rigorous theoretical explanation for the benefits of random weight perturbation. In particular, we establish non-asymptotic convergence guarantees in Wasserstein distance with the best known rate and derive an excess-risk bound for the Hessian-trace regularized objective. Extensive experiments on noisy-label and large-scale vision tasks, in both training-from-scratch and fine-tuning settings, demonstrate that fSGLD achieves superior or comparable generalization and robustness to baseline algorithms while maintaining the computational cost of SGD, about half that of SAM. Hessian-spectrum analysis further confirms that fSGLD converges to significantly flatter minima.
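The Hessian-trace regularization mentioned above follows from the standard second-order expansion of a randomized-smoothing objective; the sketch below states that textbook identity and the resulting update, not the paper's precise assumptions or constants.

```latex
% Randomized smoothing of the loss f with isotropic Gaussian weight perturbations:
\[
F_\sigma(\theta)
  = \mathbb{E}_{\varepsilon \sim \mathcal{N}(0,\sigma^2 I_d)}\bigl[f(\theta+\varepsilon)\bigr]
  = f(\theta) + \frac{\sigma^2}{2}\,\operatorname{tr}\!\bigl(\nabla^2 f(\theta)\bigr) + \mathcal{O}(\sigma^4),
\]
% so a stochastic gradient taken at randomly perturbed weights is an unbiased estimate of
% \nabla F_\sigma(\theta). The fSGLD iteration then reads
\[
\theta_{k+1} = \theta_k - \eta\,\nabla \hat f(\theta_k + \varepsilon_k)
  + \sqrt{\tfrac{2\eta}{\beta}}\,\xi_k,
\qquad \varepsilon_k \sim \mathcal{N}(0,\sigma^2 I_d),\quad \xi_k \sim \mathcal{N}(0, I_d),
\]
% with the inverse temperature \beta and the perturbation scale \sigma coupled as the paper requires.
```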
Problem

Research questions and friction points this paper is trying to address.

Seeking flat minima in high-dimensional nonconvex optimization problems
Providing rigorous theoretical guarantees for the benefits of random weight perturbation
Achieving superior generalization at the computational cost of plain SGD (see the sketch after this list)
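As a rough illustration of the efficiency point, the sketch below counts gradient evaluations per step under the usual SAM recipe (ascent gradient plus descent gradient) versus a single randomly perturbed gradient as in fSGLD (Langevin noise omitted here); it is a schematic comparison on a toy loss, not a benchmark of the paper's implementation.

```python
# Schematic per-step cost comparison (assumption: standard SAM needs two gradient
# evaluations per step, fSGLD/RWP needs one, the same as plain SGD).
import numpy as np

rng = np.random.default_rng(0)
grad_calls = 0

def grad_loss(w):
    global grad_calls
    grad_calls += 1
    return w  # gradient of the toy quadratic loss 0.5 * ||w||^2

def sam_step(w, lr=1e-2, rho=5e-2):
    g = grad_loss(w)                                    # pass 1: ascent direction
    w_adv = w + rho * g / (np.linalg.norm(g) + 1e-12)   # adversarial weight perturbation
    return w - lr * grad_loss(w_adv)                    # pass 2: descent gradient

def rwp_step(w, lr=1e-2, sigma=1e-2):
    eps = sigma * rng.standard_normal(w.shape)          # random (not adversarial) perturbation
    return w - lr * grad_loss(w + eps)                  # single pass, as in fSGLD

w = rng.standard_normal(5)
sam_step(w); print("SAM gradient evals per step:", grad_calls)        # 2
grad_calls = 0
rwp_step(w); print("RWP/fSGLD gradient evals per step:", grad_calls)  # 1
```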
Innovation

Methods, ideas, or system contributions that make the work stand out.

Augments SGLD with isotropic random weight perturbation to bias its dynamics toward flat minima
Optimizes a randomized-smoothing objective that implicitly captures curvature (Hessian-trace) information
Proves non-asymptotic convergence of the invariant measure in Wasserstein distance, with an excess-risk bound for the Hessian-trace-regularized objective
Stefano Bruno
University of Edinburgh, United Kingdom
Youngsik Hwang
Ulsan National Institute of Science and Technology, Republic of Korea
Jaehyeon An
National Technical University of Athens, Greece
Sotirios Sabanis
Professor, University of Edinburgh & National Technical University of Athens
Stochastic Analysis, Numerics, Mathematical Finance, Computational Statistics, Data Science
Dong-Young Lim
Ulsan National Institute of Science and Technology, Republic of Korea