Provable Post-Training Quantization: Theoretical Analysis of OPTQ and Qronos

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of rigorous, non-asymptotic theoretical guarantees for OPTQ (GPTQ) and Qronos, two prominent post-training quantization (PTQ) algorithms. We establish the first deterministic and stochastic non-asymptotic error bounds for these methods. Our analysis integrates matrix perturbation theory, stochastic process modeling, and error propagation analysis to derive tight upper bounds on both the ℓ²- and stronger ℓ∞-norm quantization errors. These bounds explicitly characterize the theoretical interplay among calibration dataset size, regularization strength, and quantization alphabet cardinality. Key contributions include: (i) providing formal justification for feature-ranking heuristics used in practice; (ii) explaining Qronos's empirical superiority over OPTQ via its implicit regularization; and (iii) offering principled guidance for selecting critical hyperparameters, especially the regularization coefficient. Extensive experiments across diverse large language models confirm that theory-informed parameter configurations significantly reduce quantization error while preserving model performance.

📝 Abstract
Post-training quantization (PTQ) has become a crucial tool for reducing the memory and compute costs of modern deep neural networks, including large language models (LLMs). Among PTQ algorithms, the OPTQ framework, also known as GPTQ, has emerged as a leading method due to its computational efficiency and strong empirical performance. Despite its widespread adoption, however, OPTQ lacks rigorous quantitative theoretical guarantees. This paper presents the first quantitative error bounds for both deterministic and stochastic variants of OPTQ, as well as for Qronos, a recent related state-of-the-art PTQ algorithm. We analyze how OPTQ's iterative procedure induces quantization error and derive non-asymptotic 2-norm error bounds that depend explicitly on the calibration data and on the regularization parameter used by OPTQ. Our analysis provides theoretical justification for several practical design choices, including the widely used heuristic of ordering features by decreasing norm, as well as guidance for selecting the regularization parameter. For the stochastic variant, we establish stronger infinity-norm error bounds, which enable control over the required quantization alphabet and are particularly useful for downstream layers and nonlinearities. Finally, we extend our analysis to Qronos, providing new theoretical bounds, for both its deterministic and stochastic variants, that help explain its empirical advantages.
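The abstract's reference to "OPTQ's iterative procedure" can be illustrated with a minimal NumPy sketch of the error-feedback idea behind OPTQ/GPTQ-style quantization: weights are rounded one input feature at a time, and the rounding error of each feature is pushed into the not-yet-quantized features via a regularized Hessian proxy built from calibration data. This is an illustrative sketch under simplifying assumptions (a plain uniform grid, explicit matrix inversion, no blocking), not the paper's exact algorithm; the function name `quantize_layer` and the parameters `grid_step` and `reg` are hypothetical.

```python
import numpy as np

def quantize_layer(W, X, grid_step=0.1, reg=1e-2):
    """Greedy row-by-row quantization with error feedback, in the
    spirit of OPTQ/GPTQ (illustrative sketch, not the paper's exact
    update). W: (d_in, d_out) weights, X: (n, d_in) calibration data,
    so the layer computes X @ W. `reg` plays the role of the
    regularization parameter analyzed in the paper."""
    d_in = W.shape[0]
    # Regularized Hessian proxy of the least-squares objective ||X(W - Q)||^2.
    H = X.T @ X + reg * np.eye(d_in)
    Hinv = np.linalg.inv(H)
    Q = W.astype(float).copy()
    for i in range(d_in):
        # Round row i (all outputs for input feature i) to the uniform grid.
        q = grid_step * np.round(Q[i] / grid_step)
        err = (Q[i] - q) / Hinv[i, i]
        # Propagate the rounding error into the remaining rows so later
        # rounding decisions can compensate for it.
        Q[i + 1:] -= np.outer(Hinv[i + 1:, i], err)
        Q[i] = q
    return Q
```

Under this sketch, the bounds discussed above would govern how the residual `X @ (W - Q)` scales with the calibration size `n`, the regularization `reg`, and the grid resolution `grid_step`.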
Problem

Research questions and friction points this paper is trying to address.

Lack of theoretical guarantees for OPTQ quantization
Need error bounds for deterministic and stochastic OPTQ variants
Theoretical analysis of Qronos PTQ algorithm performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quantitative error bounds for OPTQ variants
Theoretical justification for feature ordering
Infinity-norm bounds for stochastic quantization