LightSAM: Parameter-Agnostic Sharpness-Aware Minimization

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
SAM suffers from unstable generalization due to its reliance on a manually tuned perturbation radius ρ and learning rate η, and its convergence guarantees depend on problem-dependent parameters. Method: The authors propose LightSAM, a parameter-agnostic adaptive SAM framework that theoretically guarantees convergence for *any* choice of ρ and η, lifting prior theoretical restrictions. LightSAM replaces SAM's SGD-based optimization with adaptive methods: AdaGrad-Norm generates the weight perturbation, while AdaGrad or Adam drives the model update, yielding multiple variants. Contribution/Results: Experiments on image classification and language modeling show that LightSAM improves generalization, is insensitive to hyper-parameter variations, and converges efficiently. Both theoretical analysis and empirical validation confirm LightSAM's robustness and broad applicability across tasks and architectures.
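The decoupled scheme described above can be sketched in NumPy. This is an illustrative reading of the summary, not the paper's exact algorithm: the function name, the AdaGrad-Norm accumulator, and the Adam constants below are all assumptions.

```python
import numpy as np

def lightsam_step(w, grad_fn, state, rho=0.1, eta=0.01, eps=1e-8):
    """One LightSAM-style step (illustrative sketch, not the paper's
    exact algorithm): an AdaGrad-Norm accumulator scales the weight
    perturbation, and an Adam-style rule performs the model update."""
    g = grad_fn(w)
    # AdaGrad-Norm accumulator: running sum of squared gradient norms
    state["v_norm"] += float(np.dot(g, g))
    # Adaptive perturbation: the effective radius shrinks as gradients
    # accumulate, so convergence does not hinge on the value of rho
    eps_w = rho * g / (np.sqrt(state["v_norm"]) + eps)
    # Sharpness-aware gradient, evaluated at the perturbed weights
    g_sam = grad_fn(w + eps_w)
    # Adam-style moments on the SAM gradient (replaces SAM's SGD update)
    state["t"] += 1
    state["m"] = 0.9 * state["m"] + 0.1 * g_sam
    state["v"] = 0.999 * state["v"] + 0.001 * g_sam**2
    m_hat = state["m"] / (1 - 0.9 ** state["t"])
    v_hat = state["v"] / (1 - 0.999 ** state["t"])
    return w - eta * m_hat / (np.sqrt(v_hat) + eps)
```

On a toy quadratic objective this sketch converges without tuning rho or eta per problem, which is the behavior the summary attributes to LightSAM.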

📝 Abstract
The Sharpness-Aware Minimization (SAM) optimizer enhances the generalization ability of machine learning models by exploring the flat-minima landscape through weight perturbations. Despite its empirical success, SAM introduces an additional hyper-parameter, the perturbation radius, to which it is sensitive. Moreover, it has been proven that the perturbation radius and learning rate of SAM must be constrained by problem-dependent parameters to guarantee convergence. These limitations indicate the need for parameter tuning in practical applications. In this paper, we propose LightSAM, an algorithm that sets the perturbation radius and learning rate of SAM adaptively, thus extending SAM's application scope. LightSAM employs three popular adaptive optimizers, AdaGrad-Norm, AdaGrad, and Adam, to replace the SGD optimizer for weight perturbation and model updating, reducing sensitivity to parameters. Theoretical results show that, under weak assumptions, LightSAM converges for any choice of perturbation radius and learning rate, and is thus parameter-agnostic. We conduct preliminary experiments on several deep learning tasks, which together with the theoretical findings validate the effectiveness of LightSAM.
Problem

Research questions and friction points this paper is trying to address.

SAM's sensitivity to hyper-parameters, especially the perturbation radius
Convergence guarantees that constrain SAM's perturbation radius and learning rate by problem-dependent parameters, forcing parameter tuning in practice
How to set SAM's perturbation radius and learning rate adaptively to broaden its application scope
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive perturbation radius and learning rate
Uses AdaGrad-Norm, AdaGrad, Adam optimizers
Parameter-agnostic convergence under weak assumptions
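The adaptive perturbation named above can be written as a short sketch; this is our reading of the abstract, and the paper's exact normalization and constants may differ:

```latex
% Illustrative LightSAM-style update; notation assumed, not taken from the paper.
\epsilon_t = \rho \,\frac{g_t}{\sqrt{\sum_{s \le t} \lVert g_s \rVert^2} + \epsilon},
\qquad
w_{t+1} = w_t - \eta_t \,\nabla f\!\left(w_t + \epsilon_t\right)
```

Because the AdaGrad-Norm denominator grows with the accumulated gradient norms, the effective perturbation radius shrinks automatically, so convergence no longer hinges on the specific value of ρ.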
Yifei Cheng
School of Cyber Science and Technology, Sun Yat-sen University, Shenzhen Campus
Li Shen
School of Cyber Science and Technology, Sun Yat-sen University, Shenzhen Campus
Hao Sun
School of Computer Science, University of Science and Technology of China
Nan Yin
Mohamed bin Zayed University of Artificial Intelligence
Graph Neural Networks, Machine Learning, AI4Science
Xiaochun Cao
Sun Yat-sen University
Computer Vision, Artificial Intelligence, Multimedia, Machine Learning
Enhong Chen
University of Science and Technology of China
data mining, recommender system, machine learning