🤖 AI Summary
This work addresses the poorly understood implicit norm-balancing mechanism of Sharpness-Aware Minimization (SAM) in tensorized and scale-invariant models. We propose “norm deviation” as a unified metric quantifying global norm imbalance and design Deviation-Aware Scaling (DAS), an explicit method that models SAM’s implicit regularization. Through gradient flow analysis and scale-invariance modeling, we characterize SAM’s core dynamics: it suppresses high-norm cores while equalizing low-norm ones. DAS incorporates data-adaptive scaling and matches or surpasses SAM’s performance across tensor completion, noise-robust training, model compression, and efficient fine-tuning, while reducing computational overhead by 30–50%. To our knowledge, this is the first work to disentangle SAM’s generalization effect into an interpretable norm-equalization mechanism and to provide a lightweight, plug-and-play explicit implementation.
📝 Abstract
Sharpness-Aware Minimization (SAM) has proven to be an effective optimization technique for improving generalization in overparameterized models. While prior works have explored the implicit regularization of SAM in simple two-core scale-invariant settings, its behavior in more general tensorized or scale-invariant models remains underexplored. In this work, we leverage scale-invariance to analyze the norm dynamics of SAM in general tensorized models. We introduce the notion of *Norm Deviation* as a global measure of core norm imbalance, and derive its evolution under SAM using gradient flow analysis. We show that SAM's implicit control of Norm Deviation is governed by the covariance between core norms and their gradient magnitudes. Motivated by these findings, we propose a simple yet effective method, *Deviation-Aware Scaling (DAS)*, which explicitly mimics this regularization behavior by scaling core norms in a data-adaptive manner. Our experiments across tensor completion, noise-robust training, model compression, and parameter-efficient fine-tuning confirm that DAS achieves competitive or improved performance over SAM, while offering reduced computational overhead.
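To make the described mechanism concrete, here is a minimal, hypothetical sketch of a DAS-style update. The abstract states only that DAS scales core norms in a data-adaptive manner, suppressing high-norm cores while equalizing low-norm ones; the specific update rule, function name, and the `eta` step-size parameter below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def deviation_aware_scaling(cores, grads, eta=0.1):
    """Hypothetical sketch of a DAS-style step (not the paper's exact rule).

    Each entry of `cores` is a factor tensor of a tensorized model, with a
    matching gradient in `grads`. Cores whose norms exceed the global mean
    are shrunk, those below it are grown, with the adjustment modulated by
    gradient magnitude (motivated by the covariance term in the abstract).
    """
    norms = np.array([np.linalg.norm(c) for c in cores])
    grad_norms = np.array([np.linalg.norm(g) for g in grads])
    # Deviation of each core's norm from the global mean norm.
    deviation = norms - norms.mean()
    # Data-adaptive scaling toward norm balance.
    scales = 1.0 - eta * deviation * grad_norms / (norms + 1e-12)
    return [s * c for s, c in zip(scales, cores)]
```

Under this sketch, a core with above-average norm is rescaled below 1, and a core with below-average norm above 1, so repeated application drives the norms toward each other while leaving a perfectly balanced configuration unchanged.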