Scaling Laws for Precision in High-Dimensional Linear Regression

📅 2026-02-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the co-optimization of model scale, dataset size, and numerical precision in low-precision training to balance performance and computational cost. Leveraging a high-dimensional sketching-based linear regression framework, the work models quantization error and analyzes theoretical scaling laws to reveal a fundamental distinction between multiplicative and additive quantization: the former preserves the effective capacity of full-precision models, whereas the latter substantially diminishes it. Theoretical analysis characterizes the intricate coupling among model size, data volume, and precision, and extensive experiments confirm markedly different scaling behaviors between the two quantization paradigms. These findings provide a principled foundation and practical design guidelines for efficient low-precision training.

📝 Abstract
Low-precision training is critical for optimizing the trade-off between model quality and training costs, necessitating the joint allocation of model size, dataset size, and numerical precision. While empirical scaling laws suggest that quantization impacts effective model and data capacities or acts as an additive error, the theoretical mechanisms governing these effects remain largely unexplored. In this work, we initiate a theoretical study of scaling laws for low-precision training within a high-dimensional sketched linear regression framework. By analyzing multiplicative (signal-dependent) and additive (signal-independent) quantization, we identify a critical dichotomy in their scaling behaviors. Our analysis reveals that while both schemes introduce an additive error and degrade the effective data size, they exhibit distinct effects on effective model size: multiplicative quantization maintains the full-precision model size, whereas additive quantization reduces the effective model size. Numerical experiments validate our theoretical findings. By rigorously characterizing the complex interplay among model scale, dataset size, and quantization error, our work provides a principled theoretical basis for optimizing training protocols under practical hardware constraints.
Problem

Research questions and friction points this paper is trying to address.

scaling laws
low-precision training
quantization
high-dimensional linear regression
effective model size
Innovation

Methods, ideas, or system contributions that make the work stand out.

scaling laws
low-precision training
quantization
high-dimensional regression
effective model size
Dechen Zhang
Institute of Data Science, The University of Hong Kong
Xuan Tang
School of Computing & Data Science, The University of Hong Kong
Yingyu Liang
The University of Hong Kong
machine learning
Difan Zou
The University of Hong Kong
Machine Learning · Deep Learning · Optimization · Stochastic Algorithms · Signal Processing