Scaling Laws for Floating Point Quantization Training

📅 2025-01-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work systematically investigates how floating-point quantization affects low-precision training of large language models (LLMs). Method: we identify and analyze the critical design dimensions (exponent/mantissa bit allocation, quantization target selection, and scaling-factor granularity) and propose the first unified floating-point scaling law for low-precision training. Contribution/Results: our theoretical and empirical analysis reveals a critical data-volume phenomenon during low-precision training; demonstrates that exponent bits exert marginally greater influence on model performance than mantissa bits; identifies 4–8 bits as a cost-efficient precision range; and derives hardware-friendly optimal exponent-to-mantissa bit ratios. The proposed scaling law achieves <3% prediction error across mainstream LLMs, enabling joint optimization of computational resources and numerical precision.
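The exponent/mantissa trade-off described above can be illustrated with a small simulation that rounds values to a custom low-bit floating-point format. This is a hedged sketch, not the paper's method: `fp_round` is a hypothetical helper, and it ignores subnormals for brevity.

```python
import numpy as np

def fp_round(x, e_bits, m_bits):
    """Round values to a simulated float format with 1 sign bit,
    e_bits exponent bits, and m_bits mantissa bits.
    Subnormals are ignored for brevity; illustrative only."""
    sign = np.sign(x)
    mag = np.abs(x)
    # exponent of each value, clipped to the format's representable range
    bias = 2 ** (e_bits - 1) - 1
    exp = np.floor(np.log2(np.where(mag > 0, mag, 1.0)))
    exp = np.clip(exp, -bias, bias)
    # quantization step of the mantissa at this exponent
    step = 2.0 ** (exp - m_bits)
    q = np.round(mag / step) * step
    # clip overflow to the largest representable normal number
    max_val = (2.0 - 2.0 ** -m_bits) * 2.0 ** bias
    return sign * np.minimum(q, max_val)

x = np.array([1.3, 0.6, 300.0])
print(fp_round(x, e_bits=4, m_bits=3))  # an E4M3-like format
print(fp_round(x, e_bits=5, m_bits=2))  # an E5M2-like format
```

More mantissa bits shrink the rounding step near each power of two, while more exponent bits widen the representable dynamic range (e.g. 300.0 saturates at 240 in the E4M3-like format but survives in the E5M2-like one), which is the trade-off the optimal bit-ratio analysis targets.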

📝 Abstract
Low-precision training is considered an effective strategy for reducing both training and downstream inference costs. Previous scaling laws for precision mainly focus on integer quantization, pay little attention to the constituents of floating-point quantization, and thus cannot fit LLM losses well in this scenario. Meanwhile, although floating-point quantization training is more commonly used in production, research on it remains relatively shallow. In this paper, we thoroughly explore the effects of the quantization targets, exponent bits, mantissa bits, and the calculation granularity of the scaling factor on the floating-point quantization training performance of LLMs. Besides presenting an accurate unified scaling law for floating-point quantization, we offer the community several practical suggestions: (1) Exponent bits contribute slightly more to model performance than mantissa bits, and we provide the optimal exponent-mantissa bit ratio for different total bit widths, which hardware manufacturers may use as a reference; (2) We discover the formation of a critical data size in low-precision LLM training: training data beyond this critical size degrades, rather than improves, LLM performance; (3) The optimal floating-point quantization precision is directly proportional to the computational power, but within a wide range of computational power, we estimate that the best cost-performance precision lies between 4 and 8 bits.
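One of the design dimensions the abstract lists, the calculation granularity of the scaling factor, can be sketched with a toy comparison. This is an assumption-laden illustration (a crude 4-bit symmetric uniform rounding, and the helper `quant_mse` is hypothetical), not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
# rows (channels) with very different dynamic ranges
w = rng.normal(size=(4, 8)) * np.array([[0.01], [0.1], [1.0], [10.0]])

def quant_mse(w, scale):
    """Crude 4-bit symmetric rounding of w/scale, rescaled back."""
    q = np.clip(np.round(w / scale * 7), -8, 7) / 7 * scale
    return float(np.mean((w - q) ** 2))

per_tensor = np.max(np.abs(w))                          # one scale for the whole tensor
per_channel = np.max(np.abs(w), axis=1, keepdims=True)  # one scale per row

# finer-grained scales track each channel's range, so reconstruction error drops
print(quant_mse(w, per_tensor), quant_mse(w, per_channel))
```

With a single per-tensor scale, the small-magnitude rows round almost entirely to zero; per-channel scales preserve them, which is why granularity interacts with the achievable loss in the scaling law.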
Problem

Research questions and friction points this paper is trying to address.

Floating-point Quantization
Low-precision Training
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Floating-point Quantization
Optimized Quantization Rules
Computational Efficiency vs. Quantization Precision
Xingwu Sun
Tencent
Natural Language Processing, Question Answering, Question Generation
Shuaipeng Li
Tencent
Ruobing Xie
Tencent
Large Language Model, Recommender System, Natural Language Processing
Weidong Han
Tencent Inc., School of Data Science, Fudan University
Large Language Model, NLP, Multi-Modal
Kan Wu
Tencent Hunyuan
Zhen Yang
Tencent Hunyuan
Yixing Li
Tencent Hunyuan, The Chinese University of Hong Kong
An Wang
Tencent Hunyuan, Tokyo Institute of Technology
Shuai Li
Tencent Hunyuan
Jinbao Xue
Tencent Hunyuan
Yu Cheng
The Chinese University of Hong Kong
Yangyu Tao
Tencent Hunyuan
Zhanhui Kang
Tencent Hunyuan
Chengzhong Xu
University of Macau
Di Wang
University of Macau, Tencent Hunyuan
Jie Jiang
Tencent Hunyuan