🤖 AI Summary
To address the severe accuracy degradation that activation outliers cause in 4-bit quantization of large language models (LLMs), this paper proposes an SVD-based activation decomposition framework: outlier components of the activation tensor are orthogonally projected onto a low-dimensional subspace and retained in full precision, while the remaining components undergo 4-bit quantization. The method additionally incorporates W4A4/A8 hybrid quantization and LoRA-style parameter-efficient fine-tuning. Evaluated on Llama-3-8B and Qwen-2.5, it recovers 94–96% of full-precision accuracy under W4A4 quantization out of the box, and 98% with W4A4/A8 quantization plus parameter-efficient fine-tuning, substantially outperforming existing 4-bit quantization approaches. The core contribution is the first integration of SVD-driven orthogonal activation decomposition into the quantization pipeline, balancing outlier-modeling fidelity against compression efficiency. This work offers a practical path toward high-fidelity, low-overhead edge deployment of LLMs.
📝 Abstract
Large Language Models (LLMs) excel in diverse applications but suffer from inefficiency due to their massive scale. While quantization reduces computational costs, existing methods degrade accuracy in medium-sized LLMs (e.g., Llama-3-8B) because of activation outliers. To address this, we propose QUAD (Quantization with Activation Decomposition), a framework leveraging Singular Value Decomposition (SVD) to suppress activation outliers for effective 4-bit quantization. QUAD estimates activation singular vectors offline using calibration data to construct an orthogonal transformation matrix P, shifting outliers into additional full-precision dimensions while quantizing the remaining components to 4-bit. Additionally, QUAD enables parameter-efficient fine-tuning via adaptable full-precision outlier weights, narrowing the accuracy gap between quantized and full-precision models. Experiments demonstrate that QUAD achieves 94–96% accuracy under W4A4 quantization and 98% accuracy with W4A4/A8 quantization and parameter-efficient fine-tuning for Llama-3 and Qwen-2.5 models. Our code is available at [https://github.com/hyx1999/Quad](https://github.com/hyx1999/Quad).
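The decomposition described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the function names, the per-tensor symmetric 4-bit quantizer, and the rank `k` are illustrative assumptions; the idea shown is only the core mechanism of projecting activations onto a calibration-derived outlier subspace kept in full precision and quantizing the orthogonal residual.

```python
import numpy as np

def build_projection(calib_acts: np.ndarray, k: int) -> np.ndarray:
    """Top-k right singular vectors of calibration activations (n, d).

    Their span approximates the outlier subspace; rows are orthonormal.
    """
    _, _, vt = np.linalg.svd(calib_acts, full_matrices=False)
    return vt[:k]  # shape (k, d)

def quantize_4bit(x: np.ndarray) -> np.ndarray:
    """Symmetric per-tensor 4-bit fake quantization (a simplifying assumption)."""
    scale = np.abs(x).max() / 7.0 + 1e-12
    return np.clip(np.round(x / scale), -8, 7) * scale

def decompose_and_quantize(x: np.ndarray, p: np.ndarray):
    """Split x into full-precision outlier coordinates and a 4-bit residual."""
    outlier = x @ p.T            # coordinates in the outlier subspace (kept FP)
    residual = x - outlier @ p   # component orthogonal to that subspace
    return outlier, quantize_4bit(residual)

def reconstruct(outlier: np.ndarray, q_residual: np.ndarray, p: np.ndarray):
    """Recombine the two parts into an approximate activation."""
    return outlier @ p + q_residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, u = 64, np.zeros(64)
    u[0] = 1.0  # planted outlier direction
    calib = rng.normal(size=(256, d)) + 50.0 * rng.normal(size=(256, 1)) * u
    p = build_projection(calib, k=1)
    x = rng.normal(size=(8, d)) + 50.0 * rng.normal(size=(8, 1)) * u
    outlier, q_res = decompose_and_quantize(x, p)
    err_quad = np.abs(reconstruct(outlier, q_res, p) - x).mean()
    err_naive = np.abs(quantize_4bit(x) - x).mean()
    print(err_quad < err_naive)
```

Because the outlier direction no longer inflates the quantizer's scale, the residual is quantized on a much finer grid, which is the intuition behind keeping only a few extra dimensions in full precision.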