🤖 AI Summary
This work investigates how optimizer selection affects the robustness of model quantization, encompassing both post-training quantization (PTQ) and quantization-aware training (QAT). Because conventional outlier-based metrics fail to predict quantization performance, we model optimizer behavior through the lens of quantization error propagation and identify Shampoo as uniquely effective at suppressing error accumulation. We systematically evaluate six optimizers across models ranging from 50M to 1.5B parameters, combining PTQ/QAT experiments, hyperparameter tuning, and scaling-law analysis. Results show that Shampoo achieves the smallest accuracy degradation under QAT and the best parameter efficiency and quantization robustness among the tested optimizers; its second-order structure mitigates quantization distortion in weight-sensitive regions. This study is the first to establish a principled link between intrinsic geometric properties of optimizers and quantization stability, offering a new paradigm for robust low-bit training.
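To make the PTQ setting concrete, here is a minimal sketch of symmetric, per-tensor uniform quantization and the round-trip error it induces. This is a generic illustration under assumed conventions (the abstract does not specify the paper's exact quantization scheme); it also shows why weight outliers matter: a single large value stretches the quantization grid for every other weight.

```python
import numpy as np

def quantize_dequantize(w, bits=8):
    """Symmetric per-tensor uniform quantize/dequantize round-trip.

    A generic PTQ sketch, not necessarily the paper's exact scheme.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax           # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))

# A single large outlier inflates the scale, coarsening the grid for all weights.
w_outlier = w.copy()
w_outlier[0, 0] = 100.0

err_clean = np.abs(w - quantize_dequantize(w, bits=4)).mean()
err_outlier = np.abs(w_outlier - quantize_dequantize(w_outlier, bits=4)).mean()
print(f"4-bit error, clean weights:   {err_clean:.4f}")
print(f"4-bit error, outlier present: {err_outlier:.4f}")
```

Note that this measures only the isolated per-layer error; as the abstract argues, it does not capture how such errors accumulate as activations propagate through the network.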
📝 Abstract
As new optimizers gain traction and model quantization becomes standard for efficient deployment, a key question arises: how does the choice of optimizer affect model performance in the presence of quantization? Despite progress in both areas, systematic evidence on optimizer-quantization interactions remains limited. To fill this gap, we study the impact of optimizer choice on model robustness under quantization, considering both post-training quantization (PTQ) and quantization-aware training (QAT). We first train full-precision models, ranging from 50M to 1.5B parameters, with six optimizers to explore the hyperparameter landscape and establish well-tuned baselines. We then apply PTQ to evaluate how model performance degrades when trained with different optimizers. We find that outlier-related metrics, such as the max-to-mean ratio (MMR) and kurtosis, fail to predict PTQ performance across different optimizers. We show analytically that this is because the MMR captures only isolated layer errors while ignoring how quantization errors accumulate and propagate through the network. To study QAT degradation, we train quantized models from scratch and compare them to our full-precision baselines. We find that optimizers performing well in full-precision pretraining may not remain optimal under QAT, and that models trained with Shampoo show the lowest accuracy degradation. Finally, we derive scaling laws for quantization-aware training under different optimizers, showing that Shampoo achieves the highest parameter efficiency of all tested optimizers.
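The two outlier metrics named above can be sketched in a few lines. The abstract does not give exact formulas, so this assumes common definitions: MMR as the maximum absolute weight over the mean absolute weight, and kurtosis as the fourth standardized moment (non-excess, so a Gaussian scores about 3).

```python
import numpy as np

def max_to_mean_ratio(w):
    """MMR: max(|w|) / mean(|w|) -- one common definition, assumed here."""
    a = np.abs(np.asarray(w)).ravel()
    return a.max() / a.mean()

def kurtosis(w):
    """Non-excess kurtosis: the fourth standardized moment of the weights."""
    w = np.asarray(w).ravel()
    z = (w - w.mean()) / w.std()
    return np.mean(z ** 4)

rng = np.random.default_rng(0)
gaussian = rng.normal(size=10_000)   # well-behaved weight distribution
outliers = gaussian.copy()
outliers[:10] *= 50.0                # inject a handful of large outliers

print(f"gaussian: MMR={max_to_mean_ratio(gaussian):.1f}, "
      f"kurtosis={kurtosis(gaussian):.1f}")
print(f"outliers: MMR={max_to_mean_ratio(outliers):.1f}, "
      f"kurtosis={kurtosis(outliers):.1f}")
```

Both metrics react strongly to injected outliers, which is exactly why they look like plausible predictors of PTQ damage; the abstract's point is that, across optimizers, these per-tensor statistics nonetheless fail to predict end-to-end quantized performance.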