🤖 AI Summary
This work addresses how to allocate compute between the full-precision (FP) and quantized training phases of quantization-aware training (QAT), where the optimal split remains poorly understood.
Method: We derive a tokens-per-parameter-byte loss scaling law that characterizes how the loss-optimal QAT-to-FP ratio grows with the total compute budget (the statistic is sketched below). We further introduce a cooldown-and-QAT fusion strategy that performs learning-rate decay jointly with quantized training, eliminating redundant full-precision updates. Combining the scaling law with empirical analysis, we accurately predict the optimal QAT duration and final quantized-model quality across diverse model sizes and bit widths.
Results: Under fixed compute budgets, our method yields significantly more accurate quantized models. The scaling law also quantifies the trade-off between memory footprint (governed by the quantization bit width) and accuracy, enabling principled, resource-aware QAT planning.
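As a rough illustration of the tokens-per-parameter-byte statistic referenced above, the sketch below computes it for a single training run. The function name and example values are illustrative assumptions; the paper's fitted scaling-law coefficients are not reproduced here.

```python
# Hypothetical helper: training tokens divided by the model's quantized
# size in bytes. The paper uses this statistic to predict the loss-optimal
# QAT fraction; the fitted law itself is not shown here.

def tokens_per_parameter_byte(num_tokens: float, num_params: float, bit_width: int) -> float:
    bytes_per_param = bit_width / 8.0  # e.g. 4-bit weights -> 0.5 bytes per parameter
    return num_tokens / (num_params * bytes_per_param)

# Example: a 2.2B-parameter model trained on 100B tokens with 4-bit QAT
# (illustrative numbers, not results from the paper).
print(tokens_per_parameter_byte(100e9, 2.2e9, 4))  # ~90.9 tokens per parameter-byte
```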
📝 Abstract
Quantization-aware training (QAT) is a leading technique for improving the accuracy of quantized neural networks. Previous work has shown that decomposing training into a full-precision (FP) phase followed by a QAT phase yields superior accuracy compared to QAT alone. However, the optimal allocation of compute between the FP and QAT phases remains unclear. We conduct extensive experiments with various compute budgets, QAT bit widths, and model sizes from 86.0M to 2.2B parameters to investigate how different QAT durations impact final performance. We demonstrate that, contrary to previous findings, the loss-optimal ratio of QAT to FP training increases with the total amount of compute. Moreover, the optimal fraction can be accurately predicted for a wide range of model sizes and quantization widths using the tokens-per-parameter-byte statistic. From experimental data, we derive a loss scaling law that predicts both optimal QAT ratios and final model performance across different QAT/FP compute allocation strategies and QAT bit widths. We use the scaling law to make further predictions, which we verify experimentally, including which QAT bit width is optimal under a given memory constraint and how QAT accuracy with different bit widths compares to full-precision model accuracy. Additionally, we propose a novel cooldown and QAT fusion approach that performs learning rate decay jointly with quantization-aware training, eliminating redundant full-precision model updates and achieving significant compute savings. These findings provide practical insights into efficient QAT planning and enable the training of higher-quality quantized models with the same compute budget.
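The following is a minimal sketch of the cooldown-and-QAT fusion idea described in the abstract, assuming a constant-then-linear-decay learning-rate schedule. The step counts, the phase boundary, and the function name are assumptions for illustration, not the paper's exact recipe.

```python
# Sketch: fuse the learning-rate cooldown with the QAT phase, so the decay
# is performed on quantized training steps instead of a separate FP cooldown
# followed by QAT. All hyperparameters below are illustrative.

def lr_and_phase(step: int, peak_lr: float, fp_steps: int, qat_steps: int):
    """Return (learning rate, phase) for a fused schedule."""
    if step < fp_steps:
        return peak_lr, "full-precision"            # stable FP phase at peak LR
    frac = (step - fp_steps) / max(qat_steps, 1)    # progress through the QAT cooldown
    return peak_lr * max(0.0, 1.0 - frac), "qat"    # decay LR while training quantized

# Example: 8k FP steps followed by a 2k-step fused QAT cooldown.
for s in (0, 7999, 8000, 9000, 9999):
    print(s, lr_and_phase(s, 3e-4, 8000, 2000))
```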