RUQuant: Towards Refining Uniform Quantization for Large Language Models

📅 2026-04-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant accuracy degradation in uniform quantization of large language models caused by non-uniform activation distributions. The authors propose RUQuant, which, for the first time, incorporates the Lloyd-Max optimality criterion into activation quantization. RUQuant employs a two-stage orthogonal transformation: an initial approximate optimal mapping via block-wise Householder reflections and Givens rotations, followed by global fine-tuning guided by Transformer output discrepancies. Notably, the method requires no retraining and achieves 99.8% of full-precision accuracy with W6A6 quantization and 97% with W4A4 on a 13B-parameter model, with the entire optimization process taking approximately one minute.
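For context on the criterion the summary invokes, the Lloyd-Max optimality conditions for a scalar quantizer with decision boundaries $b_k$ and reconstruction levels $y_k$ under an input density $p(x)$ take the standard textbook form (this is general background, not an equation taken from the paper):

```latex
b_k = \frac{y_k + y_{k+1}}{2},
\qquad
y_k = \frac{\int_{b_{k-1}}^{b_k} x \, p(x) \, dx}{\int_{b_{k-1}}^{b_k} p(x) \, dx}
```

The second (centroid) condition is satisfied by the midpoint reconstruction of a uniform quantizer only when $p(x)$ is approximately flat within each cell, which is why transforming activations toward a more uniform distribution can reduce quantization error without retraining.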
📝 Abstract
The increasing size and complexity of large language models (LLMs) pose significant challenges for efficient deployment, particularly under resource constraints. Post-training quantization (PTQ) has emerged as a practical solution by compressing models without requiring retraining. While existing methods focus on uniform quantization schemes for both weights and activations, they often suffer from substantial accuracy degradation due to the non-uniform nature of activation distributions. In this work, we revisit the activation quantization problem from a theoretical perspective grounded in the Lloyd-Max optimality conditions. We identify the core issue as the non-uniform distribution of activations within the quantization interval, which causes the optimal quantization point under the Lloyd-Max criterion to shift away from the midpoint of the interval. To address this issue, we propose a two-stage orthogonal transformation method, RUQuant. In the first stage, activations are divided into blocks, and each block is mapped to uniformly sampled target vectors using composite orthogonal matrices constructed from Householder reflections and Givens rotations. In the second stage, a global Householder reflection is fine-tuned to further minimize quantization error using Transformer output discrepancies. Empirical results show that our method achieves near-optimal quantization performance without requiring model fine-tuning: RUQuant achieves 99.8% of full-precision accuracy with W6A6 and 97% with W4A4 quantization for a 13B LLM, in approximately one minute. A fine-tuned variant yields even higher accuracy, demonstrating the effectiveness and scalability of our approach.
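The abstract's core mechanism, applying an orthogonal transform built from Householder reflections and Givens rotations before uniform quantization, can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the function names, the symmetric toy quantizer, and the particular reflection/rotation choices are assumptions for demonstration only.

```python
import numpy as np

def householder(v):
    """Householder reflection I - 2 vv^T / (v^T v); orthogonal and symmetric."""
    v = v / np.linalg.norm(v)
    return np.eye(len(v)) - 2.0 * np.outer(v, v)

def givens(n, i, j, theta):
    """Givens rotation acting in the (i, j) coordinate plane of R^n."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c
    G[j, j] = c
    G[i, j] = -s
    G[j, i] = s
    return G

def uniform_quantize(x, bits=4):
    """Symmetric uniform quantizer over the empirical range of x (toy)."""
    scale = np.abs(x).max() / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
n = 8
# Toy "activation" vector with one outlier channel dominating the range,
# mimicking the non-uniform activation distributions the paper targets.
x = rng.standard_normal(n)
x[0] *= 20.0

# Composite orthogonal map: one plane rotation composed with one reflection.
Q = givens(n, 0, 1, np.pi / 4) @ householder(rng.standard_normal(n))

# Quantization error with and without the orthogonal pre-transform.
err_plain = np.linalg.norm(x - uniform_quantize(x))
err_rot = np.linalg.norm(Q.T @ uniform_quantize(Q @ x) - x)
```

Because `Q` is orthogonal, the transform is exactly invertible and preserves norms, so it changes only how values are distributed across the quantization grid; spreading an outlier channel across many coordinates is what lets a uniform grid fit the data better.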
Problem

Research questions and friction points this paper is trying to address.

large language models
post-training quantization
uniform quantization
activation distribution
quantization error
Innovation

Methods, ideas, or system contributions that make the work stand out.

RUQuant
uniform quantization
orthogonal transformation
Lloyd-Max optimality
post-training quantization