🤖 AI Summary
To address erroneous channel-importance estimation in large language model (LLM) quantization, caused by redundancy and noise in the original feature space, this paper proposes a saliency-aware mixed-precision quantization method that exploits the rotational invariance of the transformer. The core method projects channel activations into a low-dimensional principal-component space via PCA, enabling more robust saliency identification; channel selection is then guided by the top-K eigenvalues. An FP16/INT3-4 mixed-precision quantization scheme is then designed and integrated with kernel-level optimizations. Key contributions include: (i) the first adaptation of channel saliency detection to the PCA-transformed subspace, improving robustness against feature-space distortions; and (ii) an efficient, hardware-aware quantization framework. Experiments on 256-token generation (batch size = 64) demonstrate a 2.3× speedup over the FP16 baseline while outperforming state-of-the-art saliency-based quantization methods in both accuracy and latency.
📝 Abstract
Quantization has been widely studied as an effective technique for reducing the memory requirements of large language models (LLMs), potentially improving latency as well. Exploiting the rotational invariance of the transformer, we propose rotation-based saliency-aware weight quantization (ROSAQ), which identifies salient channels in the projected feature space rather than in the original feature space, where the projected "principal" dimensions are naturally regarded as "salient" features. The proposed ROSAQ consists of 1) PCA-based projection, which first performs principal component analysis (PCA) on a calibration set and transforms activations via the PCA projection, 2) salient channel identification, which selects the dimensions corresponding to the K largest eigenvalues as salient channels, and 3) saliency-aware quantization with mixed precision, which uses FP16 for salient dimensions and INT3/4 for the remaining dimensions. Experimental results show that ROSAQ improves over both the baseline saliency-aware quantization in the original feature space and other existing quantization methods. With kernel fusion, ROSAQ achieves about a 2.3× speedup over the FP16 implementation when generating 256 tokens with a batch size of 64.
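The abstract's three steps map naturally onto a few lines of linear algebra. Below is a minimal NumPy sketch of that pipeline for a single linear layer, not the authors' released implementation: the function names (`rosaq_quantize_sketch`, `rosaq_forward_sketch`), the symmetric per-channel INT quantizer, and all defaults (`k`, `bits`) are illustrative assumptions, and the actual ROSAQ system applies this per layer with fused kernels for speed.

```python
import numpy as np

def rosaq_quantize_sketch(W, X_calib, k=8, bits=4):
    """Sketch of a ROSAQ-style pipeline (hypothetical helper):
      1) PCA on calibration activations,
      2) top-k eigen-directions treated as salient channels,
      3) FP16 for salient dims, symmetric INT-{bits} for the rest.

    W       : (d_out, d_in) weight matrix of a linear layer
    X_calib : (n, d_in) calibration activations feeding that layer
    """
    # 1) PCA-based projection: eigendecomposition of the activation
    #    covariance (centering is used only to estimate the eigenbasis).
    Xc = X_calib - X_calib.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / Xc.shape[0]
    eigvals, Q = np.linalg.eigh(cov)             # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]
    Q = Q[:, order]                              # columns sorted by variance

    # Rotational invariance: (W Q)(Q^T x) = W x, so quantizing the rotated
    # weights W' = W Q leaves the layer's output unchanged in exact arithmetic.
    W_rot = W @ Q

    # 2) Salient channel identification: the first k principal dimensions.
    W_salient = W_rot[:, :k].astype(np.float16)  # kept in FP16

    # 3) Quantize the remaining dimensions to symmetric INT{bits} per channel.
    W_rest = W_rot[:, k:]
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(W_rest).max(axis=0, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)     # guard all-zero channels
    W_q = np.clip(np.round(W_rest / scale), -qmax - 1, qmax).astype(np.int8)

    return W_salient, W_q, scale, Q

def rosaq_forward_sketch(x, W_salient, W_q, scale, Q):
    """Apply the mixed-precision layer to x, dequantizing on the fly."""
    k = W_salient.shape[1]
    z = x @ Q                                    # pure rotation, no centering
    y = z[..., :k] @ W_salient.astype(np.float32).T
    y += z[..., k:] @ (W_q.astype(np.float32) * scale).T
    return y
```

Note that inference applies only the rotation `Q` (no mean subtraction), which is what preserves the exact equivalence with the original layer; in the paper's setting the dequantize-and-multiply step would be handled by the fused kernel rather than materialized as in this sketch.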