🤖 AI Summary
This work addresses the challenge of sparse training under high sparsity, where gradient noise severely hinders convergence and degrades generalization. To mitigate this, the authors propose the first integration of zeroth-order (ZO) optimization into Sharpness-Aware Minimization (SAM). Their method replaces the gradient of SAM's perturbation step with a zeroth-order estimate, so each iteration needs only a single backward pass, which substantially reduces both gradient variance and computational overhead. While preserving SAM's ability to locate flat minima, the approach halves the backpropagation cost, significantly enhancing the stability and convergence speed of sparse training as well as improving model robustness under distribution shift.
📝 Abstract
Deep learning models, despite their impressive achievements, suffer from high computational costs and memory requirements, limiting their usability in resource-constrained environments. Sparse neural networks significantly alleviate these constraints by dramatically reducing parameter count and computational overhead. However, existing sparse training methods often experience chaotic and noisy gradient signals, severely hindering convergence and generalization performance, particularly at high sparsity levels. To tackle this critical challenge, we propose Zero-Order Sharpness-Aware Minimization (ZO-SAM), a novel optimization framework that strategically integrates zero-order optimization within the SAM approach. Unlike traditional SAM, which requires two backpropagation steps per iteration, ZO-SAM computes the perturbation with a zero-order gradient estimate and therefore performs only a single backpropagation per update. This halves the backpropagation cost of conventional SAM while significantly lowering gradient variance and removing the overhead of the extra backward pass. By harnessing SAM's capacity for identifying flat minima, ZO-SAM stabilizes the training process and accelerates convergence. These efficiency gains are particularly important in sparse training scenarios, where computational cost is the primary bottleneck limiting the practicality of SAM. Moreover, models trained with ZO-SAM exhibit improved robustness under distribution shift, further broadening its practicality in real-world deployments.
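To make the two-step structure concrete, the following is a minimal sketch of a ZO-SAM-style update on a toy quadratic objective. The paper's exact estimator and hyperparameters are not specified here; the Gaussian finite-difference estimator, the probe count, and the values of `rho`, `mu`, and `lr` are all illustrative assumptions, and an analytic gradient stands in for backpropagation.

```python
import numpy as np

def loss(w):
    return 0.5 * np.sum(w ** 2)  # toy objective with minimum at w = 0

def grad(w):
    return w  # analytic gradient, standing in for the single backward pass

def zo_gradient(w, mu=1e-4, n_probes=8, rng=None):
    """Zero-order gradient estimate via two-sided random finite differences
    (an illustrative estimator; the paper's choice may differ)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    est = np.zeros_like(w)
    for _ in range(n_probes):
        u = rng.standard_normal(w.shape)
        est += (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu) * u
    return est / n_probes

def zo_sam_step(w, lr=0.1, rho=0.05):
    # Perturbation step: a zero-order estimate replaces SAM's first
    # backward pass, so no backpropagation is needed here.
    g_zo = zo_gradient(w)
    eps = rho * g_zo / (np.linalg.norm(g_zo) + 1e-12)
    # Update step: the single backpropagation per iteration, taken at
    # the perturbed point w + eps, as in standard SAM.
    return w - lr * grad(w + eps)

w = np.ones(10)
for _ in range(100):
    w = zo_sam_step(w)
print(loss(w))  # the iterates contract toward the flat minimum at 0
```

Compared with SAM, which would call `grad` once more in place of `zo_gradient`, the only backpropagation-equivalent call per step is the one inside the update, matching the claimed halving of backward passes.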