🤖 AI Summary
The robustness of large language models (LLMs) to post-training quantization remains poorly understood, particularly how training dynamics influence quantization error.
Method: We systematically analyze open-source LLM training trajectories (up to 32B parameters, 15T tokens), quantifying how training hyperparameters, such as learning rate decay schedules and dataset scale, correlate with quantization error.
Contribution/Results: We discover a pronounced decoupling between late-stage validation loss and quantization error, challenging the conventional assumption that larger training data volumes inevitably worsen quantization degradation. Building on this insight, we propose a training-dynamics-aware approach to optimizing quantization robustness: targeted learning rate scheduling strategies substantially reduce quantization error, validated empirically in controlled experiments at the 100B-token scale. Our work provides an interpretable, reproducible framework for co-optimizing training and quantization, enabling efficient low-bit LLM deployment.
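To make the measurement concrete, here is a minimal sketch (in PyTorch, not the authors' code) of quantization error taken as the gap between a model's quantized and full-precision validation loss. The bit-width, symmetric round-to-nearest scheme, and helper names are illustrative assumptions, not the paper's exact protocol.

```python
# Minimal sketch: quantization error as the validation-loss gap between a
# full-precision model and a weight-quantized copy. Scheme and names are
# assumptions for illustration, not the authors' implementation.
import copy
import torch
import torch.nn as nn

def quantize_tensor_rtn(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Symmetric per-tensor round-to-nearest quantization, dequantized back to float."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    if scale == 0:
        return w.clone()
    return (w / scale).round().clamp(-qmax - 1, qmax) * scale

def quantize_model_rtn(model: nn.Module, bits: int = 4) -> nn.Module:
    """Return a copy of the model with all Linear weights quantized."""
    qmodel = copy.deepcopy(model)
    with torch.no_grad():
        for module in qmodel.modules():
            if isinstance(module, nn.Linear):
                module.weight.copy_(quantize_tensor_rtn(module.weight, bits))
    return qmodel

@torch.no_grad()
def validation_loss(model: nn.Module, batches, loss_fn) -> float:
    """Average loss over an iterable of (inputs, targets) batches."""
    model.eval()
    total, n = 0.0, 0
    for x, y in batches:
        total += loss_fn(model(x), y).item()
        n += 1
    return total / max(n, 1)

def quantization_error(model, batches, loss_fn, bits: int = 4) -> float:
    """Quantization error = quantized validation loss minus full-precision loss."""
    return (validation_loss(quantize_model_rtn(model, bits), batches, loss_fn)
            - validation_loss(model, batches, loss_fn))
```

Applied to successive checkpoints of a training run, this metric is what can diverge from validation loss once the learning rate decays.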
📝 Abstract
While post-training quantization is widely adopted for efficient deployment of large language models, the mechanisms underlying quantization robustness remain unclear. We conduct a comprehensive analysis of quantization degradation across open-source language model training trajectories up to 32B parameters and 15T training tokens to accurately assess the relationship between training dynamics and quantization performance. Our key finding is that quantization errors in large-scale training runs are driven by a complex interplay between learning rate and other training hyperparameters. Specifically, once learning rates decay, validation loss and quantization error diverge, largely independent of training data scale. To investigate interventions on the training dynamics and identify specific configurations that can modulate quantization robustness favorably, we train our own models in controlled experiments up to 100B tokens. Our results challenge the assumption that increasing dataset scale inherently compromises quantization effectiveness, demonstrating instead that strategic training hyperparameter interventions can improve quantization quality at scale.
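As an illustration of the kind of learning-rate intervention the paper examines, the snippet below sketches a standard warmup-plus-cosine-decay schedule. The schedule shape and the specific values (peak_lr, min_lr, warmup) are assumptions for illustration, not the authors' recipe; the finding above is that how and when such decay happens modulates quantization error largely independently of validation loss.

```python
# Illustrative learning-rate schedule (warmup then cosine decay); values are
# placeholders, not the paper's training configuration.
import math

def lr_at_step(step: int, total_steps: int,
               peak_lr: float = 3e-4, min_lr: float = 3e-5,
               warmup: int = 2_000) -> float:
    """Linear warmup to peak_lr, then cosine decay down to min_lr."""
    if step < warmup:
        return peak_lr * step / max(warmup, 1)
    progress = (step - warmup) / max(total_steps - warmup, 1)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

if __name__ == "__main__":
    total = 100_000
    for step in (0, 2_000, 50_000, 100_000):
        print(f"step {step:>7}: lr = {lr_at_step(step, total):.2e}")
```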