Asymptotic Unbiased Sample Sampling to Speed Up Sharpness-Aware Minimization

📅 2024-06-12
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
While Sharpness-Aware Minimization (SAM) improves generalization, its computational overhead is approximately twice that of standard SGD. Method: The paper proposes AUSAM (Asymptotic Unbiased Sampling for SAM), a theoretically grounded sampling framework for SAM based on the Gradient Norm of each Sample (GNS). AUSAM approximates GNS efficiently via the loss difference before and after SAM's perturbation, enabling unbiased, dynamic, and architecture-agnostic subset selection. Contribution/Results: As a plug-and-play module, AUSAM matches SAM's accuracy on CIFAR-10/100 and Tiny-ImageNet while accelerating training by over 70%. It also maintains performance on downstream tasks, including human pose estimation and neural network quantization, broadening SAM's practical applicability without compromising optimization quality.

📝 Abstract
Sharpness-Aware Minimization (SAM) has emerged as a promising approach for effectively reducing the generalization error. However, SAM incurs twice the computational cost compared to the base optimizer (e.g., SGD). We propose Asymptotic Unbiased Sampling with respect to iterations to accelerate SAM (AUSAM), which maintains the model's generalization capacity while significantly enhancing computational efficiency. Concretely, we probabilistically sample a subset of data points beneficial for SAM optimization based on a theoretically guaranteed criterion, i.e., the Gradient Norm of each Sample (GNS). We further approximate the GNS by the difference in loss values before and after perturbation in SAM. As a plug-and-play, architecture-agnostic method, our approach consistently accelerates SAM across a range of tasks and networks, i.e., classification, human pose estimation and network quantization. On CIFAR10/100 and Tiny-ImageNet, AUSAM achieves results comparable to SAM while providing a speedup of over 70%. Compared to recent dynamic data pruning methods, AUSAM is better suited for SAM and excels in maintaining performance. Additionally, AUSAM accelerates optimization in human pose estimation and model quantization without sacrificing performance, demonstrating its broad practicality.
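The core idea in the abstract — approximate each sample's gradient norm (GNS) by its loss change under SAM's perturbation, then probabilistically keep a subset — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the absolute-difference approximation, and the Efraimidis–Spirakis weighted-sampling trick are assumptions made here for clarity.

```python
import random

def approx_gns(loss_before, loss_after):
    """Approximate the Gradient Norm of each Sample (GNS) by the absolute
    change in its loss before vs. after the SAM perturbation step.
    (Assumption: absolute difference; the paper may use a different form.)"""
    return [abs(after - before) for before, after in zip(loss_before, loss_after)]

def sample_subset(gns, keep_ratio, seed=0):
    """Keep ~keep_ratio of the sample indices, with selection probability
    proportional to GNS. Uses the Efraimidis-Spirakis key trick for weighted
    sampling without replacement -- an illustrative choice, not the paper's."""
    rng = random.Random(seed)
    k = max(1, int(len(gns) * keep_ratio))
    eps = 1e-12  # guard against zero weights
    # Each index gets key u^(1/w); the k largest keys form the weighted sample.
    keys = [(rng.random() ** (1.0 / (g + eps)), i) for i, g in enumerate(gns)]
    keys.sort(reverse=True)
    return sorted(i for _, i in keys[:k])

# Toy example: four samples; indices 0 and 2 react strongly to the perturbation,
# so they are far more likely to be kept for the SAM update.
gns = approx_gns([0.9, 0.5, 0.8, 0.4], [1.6, 0.52, 1.5, 0.41])
subset = sample_subset(gns, keep_ratio=0.5)  # keeps 2 of the 4 indices
```

In a training loop, the per-sample losses from the first (perturbation) forward pass and the perturbed forward pass are already available, so the GNS estimate adds essentially no extra cost — which is what makes the subset selection cheap enough to yield a net speedup.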
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost of Sharpness-Aware Minimization (SAM)
Maintaining generalization capacity while improving efficiency
Accelerating SAM across diverse tasks without performance loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Asymptotic Unbiased Sampling accelerates SAM
Gradient Norm of Sample guides data selection
Plug-and-play method maintains generalization capacity