Unpacking the Implicit Norm Dynamics of Sharpness-Aware Minimization in Tensorized Models

📅 2025-08-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the unclear implicit norm-balancing mechanism of Sharpness-Aware Minimization (SAM) in tensorized and scale-invariant models. We propose “norm deviation” as a unified metric quantifying global norm imbalance and design Deviation-Aware Scaling (DAS), an explicit method modeling SAM’s implicit regularization behavior. Through gradient flow analysis and scale-invariance modeling, we reveal SAM’s dynamical essence: suppressing high-norm cores while equalizing low-norm ones. DAS incorporates data-adaptive scaling and matches or surpasses SAM’s performance across tensor completion, noise-robust training, model compression, and efficient fine-tuning—while reducing computational overhead by 30–50%. To our knowledge, this is the first work to disentangle SAM’s generalization effect into an interpretable norm-equalization mechanism and provide a lightweight, plug-and-play explicit implementation.
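To make the overhead claim concrete: standard SAM needs two gradient evaluations per update (one to find the ascent perturbation, one at the perturbed point), which is the cost an explicit method like DAS avoids. Below is a minimal sketch of the vanilla SAM step (Foret et al.); the function names and the quadratic test loss are illustrative, not from this paper.

```python
import numpy as np

def sam_update(params, loss_grad, lr=0.1, rho=0.05):
    """Minimal vanilla SAM step: perturb parameters along the
    normalized ascent direction, then descend using the gradient
    taken at the perturbed point. Note the two calls to loss_grad:
    this second gradient pass is the overhead that an explicit
    scheme like DAS sidesteps."""
    g = loss_grad(params)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent perturbation
    g_sam = loss_grad(params + eps)              # gradient at perturbed point
    return params - lr * g_sam

# Illustrative usage on a toy quadratic loss f(w) = 0.5 * ||w||^2
w = np.array([1.0, 2.0])
w_next = sam_update(w, loss_grad=lambda p: p)
```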

📝 Abstract
Sharpness-Aware Minimization (SAM) has been proven to be an effective optimization technique for improving generalization in overparameterized models. While prior works have explored the implicit regularization of SAM in simple two-core scale-invariant settings, its behavior in more general tensorized or scale-invariant models remains underexplored. In this work, we leverage scale-invariance to analyze the norm dynamics of SAM in general tensorized models. We introduce the notion of *Norm Deviation* as a global measure of core norm imbalance, and derive its evolution under SAM using gradient flow analysis. We show that SAM's implicit control of Norm Deviation is governed by the covariance between core norms and their gradient magnitudes. Motivated by these findings, we propose a simple yet effective method, *Deviation-Aware Scaling (DAS)*, which explicitly mimics this regularization behavior by scaling core norms in a data-adaptive manner. Our experiments across tensor completion, noisy training, model compression, and parameter-efficient fine-tuning confirm that DAS achieves competitive or improved performance over SAM, while offering reduced computational overhead.
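The two quantities above can be sketched in code. The snippet below is a hypothetical reading of the abstract, not the paper's actual definitions: it takes "Norm Deviation" to be the spread of core Frobenius norms around their mean, and implements a DAS-style rescaling in which gradient magnitudes provide the data-adaptive weighting that pulls each core's norm toward the mean (high-norm cores are suppressed, low-norm cores grow).

```python
import numpy as np

def norm_deviation(cores):
    """Hypothetical 'Norm Deviation': squared spread of the cores'
    Frobenius norms around their mean. The paper's exact definition
    may differ; this captures the idea of a global imbalance measure."""
    norms = np.array([np.linalg.norm(c) for c in cores])
    return float(np.sum((norms - norms.mean()) ** 2))

def das_step(cores, grads, eta=0.01):
    """Sketch of a Deviation-Aware Scaling update: rescale each core
    toward the mean core norm, with the per-core gradient magnitude
    acting as a data-adaptive weight (an assumption motivated by the
    covariance result in the abstract). A single pass over the cores,
    with no second gradient evaluation as in SAM."""
    norms = np.array([np.linalg.norm(c) for c in cores])
    gmags = np.array([np.linalg.norm(g) for g in grads])
    mean_norm = norms.mean()
    scaled = []
    for c, n, g in zip(cores, norms, gmags):
        # Shrink cores above the mean norm, grow cores below it,
        # modulated by the data-dependent gradient magnitude.
        s = 1.0 - eta * g * (n - mean_norm) / max(n, 1e-12)
        scaled.append(s * c)
    return scaled
```

Under this toy rule, one step on imbalanced cores with nonzero gradients strictly reduces the deviation, mirroring the norm-equalization behavior the paper attributes to SAM.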
Problem

Research questions and friction points this paper is trying to address.

Analyzing norm dynamics in tensorized scale-invariant models
Investigating SAM's implicit control of core norm imbalance
Proposing efficient scaling method to mimic SAM regularization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scale-invariance analysis for tensorized models
Norm Deviation measure for core imbalance
Deviation-Aware Scaling method mimicking regularization
Tianxiao Cao
Graduate School of Informatics, Kyoto University, Kyoto, Japan
Kyohei Atarashi
Graduate School of Informatics, Kyoto University, Kyoto, Japan
Hisashi Kashima
Professor, Kyoto University
Machine Learning · Data Mining · Graphs and Networks · Human Computation · Human-in-the-loop AI