ICQuant: Index Coding enables Low-bit LLM Quantization

📅 2025-05-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Post-training quantization (PTQ) of large language models (LLMs) at ultra-low bit-widths (e.g., ≤3 bits) suffers severe accuracy degradation from outlier weights, and existing outlier suppression techniques struggle to balance precision gains against computational and memory overhead. Method: ICQuant is an outlier-aware, weight-only quantization framework that leverages outlier statistics to design an efficient index coding scheme. By flagging outliers with compact indices, it halves the effective quantization range at a cost of only ~0.3 extra bits per weight. Contribution/Results: Using simple scalar quantizers and no fine-tuning, ICQuant achieves state-of-the-art PTQ performance even at 2-3 bits, comparable to the best-known fine-tuned quantizer (PV-tuning). At 2.3 bits per weight, it improves the zero-shot accuracy of the 2-bit Llama3-70B model by up to 130% and 150% relative to QTIP and QuIP#, respectively, substantially outperforming prior PTQ approaches.

📝 Abstract
The rapid deployment of Large Language Models (LLMs) highlights the need for efficient low-bit post-training quantization (PTQ), due to their high memory costs. A key challenge in weight quantization is the presence of outliers, which inflate quantization ranges and lead to large errors. While a number of outlier suppression techniques have been proposed, they either fail to effectively shrink the quantization range or incur (relatively) high bit overhead. In this paper, we present ICQuant, a novel framework that leverages outlier statistics to design an efficient index coding scheme for outlier-aware weight-only quantization. Compared to existing outlier suppression techniques requiring $\approx 1$ bit overhead to halve the quantization range, ICQuant requires only $\approx 0.3$ bits; a significant saving in extreme compression regimes (e.g., 2-3 bits per weight). ICQuant can be used on top of any existing quantizers to eliminate outliers, improving the quantization quality. Using just 2.3 bits per weight and simple scalar quantizers, ICQuant improves the zero-shot accuracy of the 2-bit Llama3-70B model by up to 130% and 150% relative to QTIP and QuIP#; and it achieves comparable performance to the best-known fine-tuned quantizer (PV-tuning) without fine-tuning.
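The core idea in the abstract — flag outlier weights separately so the remaining weights can be quantized over a much narrower range — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the magnitude-based selection rule, the per-group uniform scalar quantizer, and the `outlier_frac` parameter are all assumptions made here for demonstration.

```python
import numpy as np

def _scalar_quant(x, bits):
    # Uniform scalar quantizer over the group's own [min, max] range.
    levels = 2 ** bits
    lo, hi = float(x.min()), float(x.max())
    if hi == lo:
        return np.full_like(x, lo)
    scale = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / scale) * scale

def split_quant_sketch(w, bits=2, outlier_frac=0.05):
    """Hedged sketch of outlier-aware split quantization: flag the
    largest-magnitude weights as outliers, then quantize inliers and
    outliers with separate scalar quantizers over their own (narrower)
    ranges. The boolean mask is what an index code would compress."""
    w = np.asarray(w, dtype=np.float64)
    k = max(1, int(round(outlier_frac * w.size)))
    mask = np.zeros(w.size, dtype=bool)
    mask[np.argsort(np.abs(w))[-k:]] = True  # top-k magnitudes (assumed rule)
    out = np.empty_like(w)
    out[~mask] = _scalar_quant(w[~mask], bits)  # narrow inlier range
    out[mask] = _scalar_quant(w[mask], bits)    # outliers handled apart
    return out, mask
```

On heavy-tailed weights, quantizing the two groups separately yields a much smaller reconstruction error than a single quantizer stretched over the full range, which is the failure mode the abstract attributes to outliers.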
Problem

Research questions and friction points this paper is trying to address.

Efficient low-bit quantization for Large Language Models
Reducing quantization errors caused by weight outliers
Minimizing bit overhead in outlier-aware weight quantization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages outlier statistics for efficient index coding
Reduces bit overhead to approximately 0.3 bits
Improves quantization quality without fine-tuning
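The ~0.3-bit overhead quoted above is consistent with entropy-coding a sparse binary outlier indicator, though the paper's exact coding scheme may differ; flagging a fraction p of weights costs at least the binary entropy H(p) bits per weight, and H(p) ≈ 0.29 at p = 5%:

```python
import math

def bernoulli_entropy(p):
    # Bits per weight needed to losslessly encode a Bernoulli(p)
    # outlier flag (binary entropy function H(p)).
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Illustrative: marking ~5% of weights as outliers costs roughly
# 0.29 bits/weight when the indicator is entropy-coded, versus the
# ~1 bit/weight of a naive per-weight flag.
```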