Momentum-SAM: Sharpness Aware Minimization without Computational Overhead

📅 2024-01-22
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational overhead of Sharpness-Aware Minimization (SAM), which stems from its double-gradient computation. We propose Momentum-SAM (MSAM), the first method to replace SAM’s gradient-ascent perturbation with the Nesterov Accelerated Gradient (NAG) momentum direction, achieving SAM-level generalization without any additional gradient evaluations or memory overhead. MSAM decouples the optimization trajectory from the generalization enhancement: it requires only the standard SGD/Adam momentum update plus a parameter-space perturbation, preserving the baseline training speed and memory footprint. Theoretical analysis confirms its sharpness-reduction capability. Empirically, MSAM significantly outperforms baseline optimizers and matches SAM’s generalization performance across diverse vision and language benchmarks. By eliminating SAM’s computational bottleneck while retaining its benefits, MSAM substantially improves practicality in resource-constrained settings.

📝 Abstract
The recently proposed optimization algorithm for deep neural networks, Sharpness Aware Minimization (SAM), suggests perturbing parameters before the gradient calculation by a gradient ascent step to guide the optimization into parameter space regions of flat loss. While significant generalization improvements, and thus a reduction of overfitting, could be demonstrated, the computational cost is doubled due to the additionally needed gradient calculation, making SAM infeasible when computational capacity is limited. Motivated by Nesterov Accelerated Gradient (NAG), we propose Momentum-SAM (MSAM), which perturbs parameters in the direction of the accumulated momentum vector to achieve low sharpness without significant computational overhead or memory demands over SGD or Adam. We evaluate MSAM in detail and reveal insights into the separable mechanisms of NAG, SAM, and MSAM regarding training optimization and generalization. Code is available at https://github.com/MarlonBecker/MSAM.
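The update described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration of an MSAM-style training step on a toy 1-D quadratic loss; the hyperparameter names (`lr`, `mu`, `rho`) and the sign and normalization conventions are assumptions for illustration, not the paper's exact formulation:

```python
def grad(w):
    # Gradient of the toy loss f(w) = 0.5 * w**2.
    return w

def msam_train(w, steps=300, lr=0.1, mu=0.9, rho=0.01):
    """Sketch of an MSAM-style loop: perturb the parameters along the
    normalized accumulated momentum BEFORE the single gradient
    evaluation, then apply a standard SGD-with-momentum update."""
    v = 0.0  # momentum accumulator
    for _ in range(steps):
        norm = abs(v) or 1.0            # avoid division by zero at step 0
        w_pert = w + rho * v / norm     # perturb along momentum (assumed sign)
        g = grad(w_pert)                # ONE gradient evaluation per step
        v = mu * v + g                  # usual momentum accumulation
        w = w - lr * v                  # update the unperturbed parameters
    return w
```

Note that each iteration needs only one gradient evaluation, at the perturbed point, whereas SAM requires two (one to compute the ascent perturbation, one at the perturbed point), which is the source of the overhead the paper removes.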
Problem

Research questions and friction points this paper is trying to address.

Reducing computational overhead in Sharpness Aware Minimization (SAM)
Achieving low sharpness without extra gradient calculations
Improving generalization without increasing memory demands
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses momentum vector for parameter perturbation
Reduces computational overhead compared to SAM
Combines NAG and SAM mechanisms effectively
Marlon Becker
Department for Computer Science & Institute for Geoinformatics, University of Münster, Germany
Frederick Altrock
Department for Computer Science & Institute for Geoinformatics, University of Münster, Germany
Benjamin Risse
Faculty of Mathematics & Computer Science, University of Münster, Germany
Computer Vision, Machine Learning, Ecology, Additive Manufacturing, Biomedical Image Processing