🤖 AI Summary
While Sharpness-Aware Minimization (SAM) improves generalization, it incurs approximately double the computational overhead of standard SGD. This paper proposes Adaptive Regularized SAM (ARSAM), which decomposes the SAM gradient into an SGD component and the Projection of the Second-order Gradient onto the First-order gradient direction (PSF). The authors empirically identify the PSF as the dominant driver of the flat-minima search. Building on this insight, ARSAM introduces an adaptive sampling-reusing-mixing mechanism to update PSF estimates on demand, and incorporates momentum compensation to stabilize optimization. On CIFAR-10/100, ARSAM achieves accuracy comparable to SAM while accelerating training by ~40%. Its effectiveness and generality are further validated on human pose estimation and model quantization tasks. Key contributions: (i) a novel gradient-decomposition perspective on SAM; (ii) the observation that the PSF dominates flat-minima optimization; and (iii) an efficient PSF reuse mechanism that significantly reduces computational cost without sacrificing performance.
📝 Abstract
Sharpness-Aware Minimization (SAM) improves model generalization but doubles the computational cost of Stochastic Gradient Descent (SGD) by requiring two gradient computations per optimization step. To mitigate this, we propose ARSAM, which Adaptively samples, Reuses, and mixes decomposed gradients to significantly accelerate SAM. Concretely, we first show that SAM's gradient can be decomposed into the SGD gradient and the Projection of the Second-order gradient onto the First-order gradient (PSF). Furthermore, we observe that the SGD gradient and the PSF evolve dynamically during training, with the PSF playing an increasingly important role in reaching a flat minimum. ARSAM therefore reuses the PSF across steps and updates it in a timely manner, which preserves the model's generalization ability. Extensive experiments show that ARSAM achieves accuracies comparable to SAM across diverse network architectures. On CIFAR-10/100, ARSAM matches SAM's accuracy while providing a speedup of about 40%. Moreover, ARSAM accelerates optimization on various challenging tasks (*e.g.*, human pose estimation and model quantization) without sacrificing performance, demonstrating its broad practicality. The code is publicly accessible at: https://github.com/ajiaaa/ARSAM.
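To make the decomposition concrete, here is a minimal numerical sketch of the idea described above, not the authors' implementation. It uses a hypothetical 2-D quadratic loss (chosen only so the gradients are analytic; ARSAM targets deep networks) to show that the SAM gradient splits into the plain SGD gradient plus a residual PSF term, and that a step which reuses a cached PSF can avoid the second gradient pass:

```python
import math

# Hypothetical quadratic loss L(w) = 0.5 * w^T A w, so grad L(w) = A w.
A = [[3.0, 0.5], [0.5, 1.0]]

def grad(w):
    # Analytic gradient of the toy loss: A @ w.
    return [sum(A[i][j] * w[j] for j in range(2)) for i in range(2)]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

rho = 0.05                      # SAM's perturbation radius (illustrative value)
w = [1.0, 2.0]                  # current parameters

g_sgd = grad(w)                 # first gradient pass: the plain SGD gradient
n = norm(g_sgd)
eps = [rho * g / n for g in g_sgd]                  # SAM's ascent perturbation
g_sam = grad([wi + ei for wi, ei in zip(w, eps)])   # second pass: SAM gradient

# PSF: the component of the SAM gradient beyond the SGD gradient.
psf = [s - g for s, g in zip(g_sam, g_sgd)]

# Reuse sketch: a "cheap" step skips the second gradient pass and adds a
# cached PSF to a fresh SGD gradient instead of recomputing g_sam.
g_sam_reused = [g + p for g, p in zip(g_sgd, psf)]
print(g_sam, g_sam_reused)
```

In a real training loop the cached `psf` would be refreshed only on selected iterations (ARSAM's adaptive sampling/reusing schedule), which is where the ~40% speedup comes from.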