🤖 AI Summary
This work addresses the performance limitations of quantization-aware training (QAT) at ultra-low bitwidths (2–4 bits), where gradient mismatch and training instability often hinder convergence. To overcome these challenges, the authors propose StableQAT, which introduces discrete Fourier analysis into QAT for the first time to construct a family of lightweight, smooth, and bounded gradient surrogate functions that rigorously generalize the straight-through estimator (STE). This approach improves training stability, model accuracy, and robustness to hyperparameter choices while incurring negligible additional training overhead, enabling efficient and reliable ultra-low-bit quantization-aware training.
📝 Abstract
Quantization-aware training (QAT) is essential for deploying large models under strict memory and latency constraints, yet achieving stable and robust optimization at ultra-low bitwidths remains challenging. Common approaches based on the straight-through estimator (STE) or soft quantizers often suffer from gradient mismatch, instability, or high computational overhead. To address this, we propose StableQAT, a unified and efficient QAT framework that stabilizes training in ultra-low-bit settings via a novel, lightweight, and theoretically grounded surrogate for backpropagation, derived from a discrete Fourier analysis of the rounding operator. StableQAT strictly generalizes STE, which arises as a special case of our more expressive surrogate family, yielding smooth, bounded, and inexpensive gradients that improve QAT training performance and stability across a range of hyperparameter choices. In experiments at 2–4-bit regimes, StableQAT demonstrates improved training stability, robustness, and superior performance over standard QAT techniques with negligible training overhead. Our code is available at https://github.com/microsoft/StableQAT.
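To make the core idea concrete, below is a minimal sketch of what a truncated-Fourier gradient surrogate for the rounding operator could look like. It relies only on the standard sawtooth identity `round(x) = x - s(x)`, where `s(x)` has a classical Fourier series; the function name, the truncation scheme, and the parameter `K` are illustrative assumptions, not StableQAT's actual surrogate family (see the linked repository for the real implementation). Truncating at `K = 0` harmonics recovers the STE, whose surrogate gradient is identically 1.

```python
import numpy as np

def surrogate_grad(x, K=0):
    """Illustrative truncated-Fourier surrogate gradient for round(x).

    Uses the sawtooth identity round(x) = x - s(x), with
        s(x) = sum_{k>=1} (-1)^(k+1) * sin(2*pi*k*x) / (pi*k),
    and differentiates the series truncated at K harmonics.
    K = 0 keeps no harmonics and reduces to the STE (gradient = 1).
    This is a sketch of the general technique, not StableQAT's exact surrogate.
    """
    x = np.asarray(x, dtype=float)
    g = np.ones_like(x)  # the STE baseline: d/dx of the linear term x
    for k in range(1, K + 1):
        # derivative of -(-1)^(k+1) sin(2*pi*k*x)/(pi*k) is
        # -(-1)^(k+1) * 2 * cos(2*pi*k*x): smooth and bounded by 2 per harmonic
        g -= ((-1) ** (k + 1)) * 2.0 * np.cos(2 * np.pi * k * x)
    return g
```

In a QAT backward pass, such a surrogate would multiply the upstream gradient in place of the (almost-everywhere-zero) true derivative of rounding; larger `K` concentrates gradient mass near quantization decision boundaries while staying smooth and bounded, which is the kind of behavior the abstract attributes to the surrogate family.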