Dissecting Quantization Error: A Concentration-Alignment Perspective

📅 2026-03-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Although quantization enhances the efficiency of large language models and vision models, it often suffers from accuracy degradation due to error accumulation. This work systematically analyzes quantization error in linear layers from the perspective of signal-to-quantization-noise ratio (SQNR), revealing for the first time that such error depends not only on the concentration of weights and activations but, more critically, on the alignment between their dominant variation directions. Building on this insight, the authors propose a lightweight, block-wise Concentration–Alignment Transformation (CAT) that jointly optimizes concentration and alignment via covariance estimation, establishing a unified post-training quantization framework. Extensive experiments demonstrate that, under 4-bit quantization, CAT consistently matches or surpasses existing transformation-based methods across multiple large language models, validating the effectiveness of the proposed mechanism.
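The "function-preserving" property behind transforms like CAT can be sketched in a few lines: an invertible matrix applied to the activations and its inverse folded into the weights leaves the layer's output unchanged in full precision, while reshaping the tensors that get quantized. The sketch below (an illustrative assumption, not the paper's actual CAT construction) uses a PCA-style orthogonal rotation derived from an activation covariance estimate, which is one simple way to obtain such a transform from a calibration set.

```python
import numpy as np

def covariance_rotation(x_calib):
    # Orthogonal rotation from the eigenvectors of the calibration
    # covariance (a hypothetical stand-in for the paper's learned transform).
    cov = np.cov(x_calib, rowvar=False)
    _, Q = np.linalg.eigh(cov)
    return Q

rng = np.random.default_rng(0)
# Correlated "activations" from a small calibration set, and random weights.
X = rng.normal(size=(256, 64)) @ rng.normal(size=(64, 64))
W = rng.normal(size=(64, 32))

Q = covariance_rotation(X)

# Function preservation: (X Q)(Q^T W) equals X W up to floating point,
# so quantizing X Q and Q^T W instead of X and W changes nothing at fp32.
Y = X @ W
Yt = (X @ Q) @ (Q.T @ W)
assert np.allclose(Y, Yt)
```

Because `Q` is orthogonal, it can only rotate the activation distribution, not rescale it; the paper's point is that choosing this rotation to also align the dominant variation directions of weights and activations is what reduces quantization error beyond concentration alone.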

📝 Abstract
Quantization can drastically increase the efficiency of large language and vision models, but typically incurs an accuracy drop. Recently, function-preserving transforms (e.g., rotations, the Hadamard transform, channel-wise scaling) have been successfully applied to reduce post-training quantization error, yet a principled explanation remains elusive. We analyze linear-layer quantization via the signal-to-quantization-noise ratio (SQNR), showing that for uniform integer quantization at a fixed bit width, SQNR decomposes into (i) the concentration of weights and activations (capturing spread and outliers) and (ii) the alignment of their dominant variation directions. This reveals an actionable insight: beyond concentration, the focus of most prior transforms (e.g., rotations or the Hadamard transform), improving the alignment between weights and activations can further reduce quantization error. Motivated by this, we introduce the block Concentration-Alignment Transform (CAT), a lightweight linear transformation that uses a covariance estimate from a small calibration set to jointly improve concentration and alignment, approximately maximizing SQNR. Experiments across several LLMs show that CAT consistently matches or outperforms prior transform-based quantization methods at 4-bit precision, confirming the insights from our framework.
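The SQNR metric the analysis is built on is straightforward to compute. The sketch below (a minimal illustration, not the paper's code) applies symmetric per-tensor uniform integer quantization to the weights and activations of a linear layer and measures the resulting output SQNR; the quantizer and tensor shapes here are illustrative assumptions.

```python
import numpy as np

def uniform_quantize(x, bits=4):
    # Symmetric per-tensor uniform integer quantization (round-to-nearest).
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(x)) / qmax
    return np.clip(np.round(x / scale), -qmax, qmax) * scale

def sqnr_db(signal, approx):
    # Signal-to-quantization-noise ratio in decibels:
    # 10 log10(||signal||^2 / ||signal - approx||^2).
    noise = signal - approx
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(128, 64))   # activations
W = rng.normal(size=(64, 32))    # weights

Y = X @ W                                            # full-precision output
Yq = uniform_quantize(X, 4) @ uniform_quantize(W, 4) # 4-bit W4A4 output
print(f"output SQNR: {sqnr_db(Y, Yq):.1f} dB")
```

Under the paper's decomposition, any function-preserving transform that tightens the spread of `X` and `W` (concentration) or brings their dominant directions together (alignment) should raise this output SQNR at the same bit width.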
Problem

Research questions and friction points this paper is trying to address.

quantization error
concentration
alignment
large language models
post-training quantization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concentration-Alignment
Quantization Error
SQNR
Function-Preserving Transform
Post-Training Quantization