ROSAQ: Rotation-based Saliency-Aware Weight Quantization for Efficiently Compressing Large Language Models

📅 2025-06-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address erroneous channel importance estimation in large language model (LLM) quantization—caused by redundancy and noise in the original feature space—this paper proposes a saliency-aware mixed-precision quantization method leveraging PCA’s rotational invariance. The core method projects channel activations into a low-dimensional principal component space via PCA, enabling more robust saliency identification; channel selection is then guided by the top-K eigenvalues. Subsequently, an FP16/INT3–4 mixed-precision quantization scheme is designed, integrated with kernel-level optimizations. Key contributions include: (i) the first adaptation of channel saliency detection to the PCA-transformed subspace, enhancing robustness against feature-space distortions; and (ii) an efficient, hardware-aware quantization framework. Experiments on 256-token generation (batch size = 64) demonstrate a 2.3× speedup over the FP16 baseline while outperforming state-of-the-art saliency-based quantization methods in both accuracy and latency.

📝 Abstract
Quantization has been widely studied as an effective technique for reducing the memory requirement of large language models (LLMs), potentially improving latency as well. Utilizing the rotational invariance of transformers, we propose rotation-based saliency-aware weight quantization (ROSAQ), which identifies salient channels in the projected feature space rather than the original feature space, where the projected "principal" dimensions are naturally considered "salient" features. The proposed ROSAQ consists of 1) PCA-based projection, which first performs principal component analysis (PCA) on a calibration set and transforms activations via the PCA projection, 2) salient channel identification, which selects the dimensions corresponding to the K largest eigenvalues as salient channels, and 3) saliency-aware quantization with mixed precision, which uses FP16 for salient dimensions and INT3/4 for the other dimensions. Experimental results show that ROSAQ improves over both the baseline saliency-aware quantization on the original feature space and other existing quantization methods. With kernel fusion, ROSAQ achieves about a 2.3× speedup over an FP16 implementation when generating 256 tokens with a batch size of 64.
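The rotational invariance the abstract relies on can be sketched in a few lines of NumPy: because the PCA projection matrix is orthogonal, rotating the activations and counter-rotating the weight leaves the layer output unchanged, so the quantization can operate entirely in the principal-component space. A minimal sketch, with illustrative names not taken from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(1024, d))      # calibration activations (tokens x channels)
W = rng.normal(size=(d, d))         # a linear layer's weight

# PCA rotation from the calibration covariance
cov = np.cov(X, rowvar=False)
eigvals, Q = np.linalg.eigh(cov)    # columns of Q are principal directions
order = np.argsort(eigvals)[::-1]   # sort by decreasing eigenvalue (saliency)
eigvals, Q = eigvals[order], Q[:, order]

# Rotational invariance: (X Q)(Q^T W) = X W, since Q Q^T = I
out_orig = X @ W
out_rot = (X @ Q) @ (Q.T @ W)
assert np.allclose(out_orig, out_rot, atol=1e-8)
```

Quantizing `Q.T @ W` instead of `W` is what lets the top-eigenvalue dimensions be treated as the salient channels.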
Problem

Research questions and friction points this paper is trying to address.

Reducing memory requirements of large language models via quantization
Identifying salient channels in projected feature space for better quantization
Improving latency and speed in model execution with mixed-precision quantization
Innovation

Methods, ideas, or system contributions that make the work stand out.

PCA-based projection for feature transformation
Salient channel identification via largest eigenvalues
Mixed-precision FP16/INT3-4 saliency-aware quantization
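The last two steps above — selecting the top-K eigenvalue channels and applying mixed-precision quantization — can be sketched as follows. This is a simplified per-column round-to-nearest INT4 stand-in for the paper's group-wise INT3/4 kernels, and all function names are hypothetical:

```python
import numpy as np

def quantize_int4_rtn(w):
    """Symmetric round-to-nearest INT4 quantization with a per-column scale.
    Returns the dequantized weight, so the quantization error is visible."""
    scale = np.abs(w).max(axis=0, keepdims=True) / 7.0   # symmetric range [-7, 7]
    scale = np.where(scale == 0, 1.0, scale)             # guard all-zero columns
    q = np.clip(np.round(w / scale), -7, 7)
    return q * scale

def mixed_precision_quantize(w_rot, eigvals, k):
    """Keep the k input channels (rows of the rotated weight) with the largest
    eigenvalues in FP16; quantize the remaining rows to INT4."""
    salient = np.argsort(eigvals)[::-1][:k]
    w_q = quantize_int4_rtn(w_rot)
    w_q[salient, :] = w_rot[salient, :].astype(np.float16)  # salient rows stay FP16
    return w_q
```

In the actual method the rotated weight `Q.T @ W` would be passed in, so the salient rows correspond exactly to the principal components with the largest eigenvalues.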