DiscQuant: A Quantization Method for Neural Networks Inspired by Discrepancy Theory

📅 2025-01-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the problem of optimally rounding neural-network weights to a fixed quantization grid, moving beyond the conventional round-to-nearest (RTN) baseline. We introduce DiscQuant, a quantization framework that applies discrepancy theory to design data-dependent rounding strategies. Under the assumption that the model's gradients are approximately low rank, we prove that the expected approximation error of the quantized model can be bounded by ε and derive an efficient rounding algorithm from the proof. Our approach combines discrepancy-theoretic modeling, a low-rank approximation of the gradient space, and data-driven optimization. Evaluated on Phi3-mini-3.8B and Llama3.1-8B, DiscQuant reaches 64% accuracy on GSM8k at 3.25 bits per parameter, 10 percentage points above GPTQ and far above RTN. The method offers a principled balance of quantization accuracy, computational efficiency, and theoretical rigor.
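For reference, the RTN baseline the summary compares against simply snaps each weight to its closest grid value, independently of any data. A minimal sketch (the grid and weight values here are illustrative, not taken from the paper):

```python
import numpy as np

def round_to_nearest(weights, grid):
    """Round each weight to the closest value in the quantization grid (RTN)."""
    grid = np.sort(np.asarray(grid))
    # For each weight, pick the index of the nearest grid point.
    idx = np.abs(weights[:, None] - grid[None, :]).argmin(axis=1)
    return grid[idx]

w = np.array([0.12, -0.4, 0.9])
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
print(round_to_nearest(w, grid))  # [ 0.  -0.5  1. ]
```

DiscQuant's point of departure is that this per-weight rule ignores how rounding errors interact through the data, which is what its data-dependent objective exploits.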

📝 Abstract
Quantizing the weights of a neural network has two steps: (1) Finding a good low bit-complexity representation for weights (which we call the quantization grid) and (2) Rounding the original weights to values in the quantization grid. In this paper, we study the problem of rounding optimally given any quantization grid. The simplest and most commonly used way to round is Round-to-Nearest (RTN). By rounding in a data-dependent way instead, one can improve the quality of the quantized model significantly. We study the rounding problem from the lens of \emph{discrepancy theory}, which studies how well we can round a continuous solution to a discrete solution without affecting solution quality too much. We prove that given $m=\mathrm{poly}(1/\epsilon)$ samples from the data distribution, we can round all but $O(m)$ model weights such that the expected approximation error of the quantized model on the true data distribution is $\le \epsilon$ as long as the space of gradients of the original model is approximately low rank (which we empirically validate). Our proof, which is algorithmic, inspired a simple and practical rounding algorithm called \emph{DiscQuant}. In our experiments, we demonstrate that DiscQuant significantly improves over the prior state-of-the-art rounding method called GPTQ and the baseline RTN over a range of benchmarks on Phi3mini-3.8B and Llama3.1-8B. For example, rounding Phi3mini-3.8B to a fixed quantization grid with 3.25 bits per parameter using DiscQuant gets 64% accuracy on the GSM8k dataset, whereas GPTQ achieves 54% and RTN achieves 31% (the original model achieves 84%). We make our code available at https://github.com/jerry-chee/DiscQuant.
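The "round all but $O(m)$ weights" claim can be illustrated with a toy version of the discrepancy-style rounding the abstract describes: for a linear model, one can walk from the original weights toward a grid vertex while staying in the null space of the calibration data, so the model's outputs on those samples never change and only a handful of weights are left for a nearest-neighbor fallback. This is a hedged sketch of the idea, not the paper's DiscQuant algorithm; the toy model, grid, and all variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 12, 4                         # more weights than calibration samples
w = rng.normal(size=d)               # original weights
X = rng.normal(size=(m, d))          # calibration inputs; toy model output is X @ w
step = 0.5                           # uniform grid spacing
lo = np.floor(w / step) * step       # lower grid neighbor of each weight
hi = lo + step                       # upper grid neighbor

# Walk from w toward a grid vertex while keeping X @ w exactly fixed:
# every move lies in the null space of X restricted to the not-yet-rounded
# coordinates, and each step pins at least one more weight to lo or hi,
# so all but ~m weights get rounded without changing the outputs.
w_q = w.copy()
free = np.ones(d, dtype=bool)
while free.sum() > m:
    _, s, Vt = np.linalg.svd(X[:, free])
    direction = Vt[len(s):][0]       # a unit vector in ker(X[:, free])
    cur = w_q[free]
    with np.errstate(divide="ignore"):
        t_cand = np.concatenate([(hi[free] - cur) / direction,
                                 (lo[free] - cur) / direction])
    # Smallest positive step that drives some free coordinate to its bound.
    t = t_cand[np.isfinite(t_cand) & (t_cand > 1e-12)].min()
    w_q[free] = cur + t * direction
    snap_hi = np.isclose(w_q, hi) & free
    snap_lo = np.isclose(w_q, lo) & free
    w_q[snap_hi] = hi[snap_hi]       # pin exactly onto the grid
    w_q[snap_lo] = lo[snap_lo]
    free &= ~(snap_hi | snap_lo)

# The few remaining coordinates fall back to round-to-nearest.
w_q[free] = np.where(w_q[free] - lo[free] > step / 2, hi[free], lo[free])
print(np.max(np.abs(w_q - w)))       # every weight moved by less than one grid step
```

In this sketch the low-rank condition from the abstract corresponds to $m < d$: the fewer effective constraints the data imposes, the larger the null space and the more weights can be rounded "for free" before the nearest-neighbor fallback kicks in.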
Problem

Research questions and friction points this paper is trying to address.

Quantized Neural Networks
Weight Optimization
Model Accuracy Preservation
Innovation

Methods, ideas, or system contributions that make the work stand out.

DiscQuant
Weight Quantization
Model Compression