🤖 AI Summary
SAM's generalization gains are sensitive to a manually tuned perturbation radius ρ and learning rate η, and its convergence guarantees require these hyperparameters to be constrained by problem-dependent quantities.
Method: We propose LightSAM—a parameter-agnostic adaptive SAM framework—that theoretically guarantees convergence for *any* choice of ρ and η, thereby lifting prior theoretical restrictions. LightSAM replaces SAM's SGD-based optimization in both the perturbation step and the model-update step with adaptive optimizers: it employs AdaGrad-Norm to set the weight perturbation adaptively, and applies AdaGrad and Adam for the model update.
Contribution/Results: Preliminary experiments on several deep learning tasks indicate that LightSAM retains SAM's generalization benefits while being far less sensitive to hyperparameter choices. Together with the theoretical analysis under weak assumptions, these results support LightSAM's robustness and applicability across tasks.
📝 Abstract
The Sharpness-Aware Minimization (SAM) optimizer enhances the generalization ability of machine learning models by seeking flat minima of the loss landscape through weight perturbations. Despite its empirical success, SAM introduces an additional hyper-parameter, the perturbation radius, to which its performance is sensitive. Moreover, it has been proven that the perturbation radius and learning rate of SAM must be constrained by problem-dependent parameters to guarantee convergence. These limitations necessitate hyper-parameter tuning in practical applications. In this paper, we propose LightSAM, an algorithm that sets the perturbation radius and learning rate of SAM adaptively, thus extending the application scope of SAM. LightSAM employs three popular adaptive optimizers—AdaGrad-Norm, AdaGrad, and Adam—in place of the SGD optimizer for weight perturbation and model updating, reducing sensitivity to these parameters. Theoretical results show that under weak assumptions, LightSAM converges for any choice of perturbation radius and learning rate, making it parameter-agnostic. We conduct preliminary experiments on several deep learning tasks, which together with the theoretical findings validate the effectiveness of LightSAM.
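To make the decoupling concrete, the following is a minimal NumPy sketch of one LightSAM-style step under the structure the abstract describes: an AdaGrad-Norm accumulator scales the weight perturbation (so the effective radius adapts to observed gradient norms), and an Adam update is applied to the gradient taken at the perturbed point. All variable names, the toy quadratic loss, and the specific update details are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def lightsam_step(w, grad_fn, state, rho=0.1, lr=0.01,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """One LightSAM-style step (sketch, not the paper's exact method):
    AdaGrad-Norm scales the perturbation; Adam performs the model update."""
    g = grad_fn(w)
    # AdaGrad-Norm accumulator: the effective perturbation radius
    # shrinks as squared gradient norms accumulate, removing the need
    # to hand-tune rho for convergence.
    state["b"] += np.dot(g, g)
    perturb = rho * g / np.sqrt(state["b"] + eps)
    g_sharp = grad_fn(w + perturb)  # gradient at the perturbed point
    # Adam update using the sharpness-aware gradient.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1 - beta1) * g_sharp
    state["v"] = beta2 * state["v"] + (1 - beta2) * g_sharp**2
    m_hat = state["m"] / (1 - beta1 ** state["t"])
    v_hat = state["v"] / (1 - beta2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w = np.array([1.0, -2.0])
state = {"b": 0.0, "t": 0, "m": np.zeros(2), "v": np.zeros(2)}
for _ in range(500):
    w = lightsam_step(w, lambda x: x, state)
```

The key point the sketch illustrates is the decoupling: the perturbation magnitude and the update step size are each normalized by their own adaptive statistics, so neither rho nor lr needs problem-dependent tuning.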