🤖 AI Summary
This work addresses the high computational overhead of Sharpness-Aware Minimization (SAM), which stems from its two gradient computations per update step. We propose Momentum-SAM (MSAM), the first method to replace SAM's gradient-ascent perturbation with the accumulated momentum direction of Nesterov Accelerated Gradient (NAG), achieving SAM-level generalization without any additional gradient evaluations or memory overhead. MSAM requires only a standard SGD/Adam momentum update plus a parameter-space perturbation, preserving the baseline optimizer's training speed and memory footprint, and thereby decouples the optimization trajectory from the generalization enhancement. A theoretical analysis confirms its sharpness-reducing effect. Empirically, MSAM significantly outperforms the baseline optimizers and matches SAM's generalization across diverse vision and language benchmarks. By eliminating SAM's computational bottleneck while retaining its benefits, MSAM makes sharpness-aware training practical in resource-constrained settings.
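To make the update concrete, the following is a minimal PyTorch-style sketch of one MSAM-like step under these assumptions: a gradient-accumulating momentum buffer `v` (as in `torch.optim.SGD`), a perturbation of `-rho * v / ||v||` applied before the single forward/backward pass (our reading of the NAG-motivated lookahead), and illustrative names (`msam_step`, `rho`, `mu`) that are not the authors' API; consult the linked repository for the reference implementation.

```python
import torch

def msam_step(model, loss_fn, inputs, targets, state, lr=0.1, rho=0.3, mu=0.9):
    """One hypothetical MSAM-like step: perturb along the momentum
    direction, take a single gradient, restore, then update as SGD-momentum."""
    params = [p for p in model.parameters() if p.requires_grad]
    with torch.no_grad():
        for p in params:  # lazily create momentum buffers v
            state.setdefault(p, torch.zeros_like(p))
        # Global norm of the momentum buffers (zero at step 0 -> no perturbation).
        norm = torch.sqrt(sum(state[p].pow(2).sum() for p in params)).item()
        scale = rho / (norm + 1e-12)
        for p in params:  # perturb: theta_tilde = theta - rho * v / ||v||
            p.sub_(state[p], alpha=scale)

    # The only forward/backward pass of the step, at the perturbed weights.
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    with torch.no_grad():
        for p in params:
            p.add_(state[p], alpha=scale)   # undo the perturbation
            state[p].mul_(mu).add_(p.grad)  # v <- mu * v + grad
            p.sub_(state[p], alpha=lr)      # theta <- theta - lr * v
    return loss
```

Because the perturbation direction is read from the momentum buffer rather than computed by an extra gradient step, each update costs one forward/backward pass, the same as plain SGD with momentum.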
📝 Abstract
Sharpness-Aware Minimization (SAM), a recently proposed optimization algorithm for deep neural networks, perturbs the parameters by a gradient-ascent step before the gradient calculation in order to guide the optimization into regions of parameter space with flat loss. While SAM has been shown to deliver significant generalization improvements, and thus to reduce overfitting, it doubles the computational cost because of the additionally required gradient calculation, making it infeasible when computational capacity is limited. Motivated by Nesterov Accelerated Gradient (NAG), we propose Momentum-SAM (MSAM), which perturbs parameters in the direction of the accumulated momentum vector to achieve low sharpness without significant computational or memory overhead compared to SGD or Adam. We evaluate MSAM in detail and reveal insights into the separable mechanisms of NAG, SAM, and MSAM with respect to training optimization and generalization. Code is available at https://github.com/MarlonBecker/MSAM.
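For contrast, here is a minimal sketch of the SAM step described above, under the same illustrative conventions (`sam_step`, `rho`, and `base_optimizer` are assumed names, not a library API). It shows where the doubled cost arises: two full forward/backward passes per update, one to find the ascent direction and one at the perturbed weights.

```python
import torch

def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    """One hypothetical SAM step: two forward/backward passes per update."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Pass 1: gradient at the current weights (the extra cost vs. SGD/MSAM).
    model.zero_grad()
    loss_fn(model(inputs), targets).backward()
    with torch.no_grad():
        norm = torch.sqrt(sum(p.grad.pow(2).sum() for p in params)).item()
        eps = [p.grad * (rho / (norm + 1e-12)) for p in params]
        for p, e in zip(params, eps):
            p.add_(e)  # ascend: theta_tilde = theta + rho * grad / ||grad||

    # Pass 2: gradient at the perturbed weights.
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)  # return to the unperturbed weights
    base_optimizer.step()  # e.g. SGD/Adam update using the perturbed gradient
    return loss
```

MSAM removes pass 1 entirely by reusing the momentum vector as the perturbation direction, which is what restores the cost of a plain SGD/Adam step.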