LRQ: Optimizing Post-Training Quantization for Large Language Models by Learning Low-Rank Weight-Scaling Matrices

📅 2024-07-16
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Existing post-training quantization (PTQ) methods for LLMs still suffer from non-negligible accuracy drops when quantizing both weights and activations, especially on massive multitask language understanding (MMLU). To address this, the paper proposes Low-Rank Quantization (LRQ), a post-training weight quantization method that reconstructs the outputs of intermediate Transformer blocks using low-rank weight-scaling matrices in place of full scaling matrices that entail one learnable scale per weight. Parameter sharing through the low-rank structure means LRQ learns far fewer parameters while still scaling each weight individually, which improves the generalization of quantized LLMs. Experiments show LRQ outperforms prior LLM PTQ methods under 8-bit weight and per-tensor activation quantization, 4-bit weight and 8-bit per-token activation quantization, and low-bit weight-only quantization, significantly mitigating accuracy loss on MMLU.

📝 Abstract
With the commercialization of large language models (LLMs), weight-activation quantization has emerged to compress and accelerate LLMs, achieving high throughput while reducing inference costs. However, existing post-training quantization (PTQ) techniques for quantizing the weights and activations of LLMs still suffer from non-negligible accuracy drops, especially on massive multitask language understanding. To address this issue, we propose Low-Rank Quantization (LRQ), a simple yet effective post-training weight quantization method for LLMs that reconstructs the outputs of an intermediate Transformer block by leveraging low-rank weight-scaling matrices, replacing the conventional full weight-scaling matrices that entail as many learnable scales as their associated weights. Thanks to parameter sharing via the low-rank structure, LRQ needs to learn significantly fewer parameters while still enabling the individual scaling of weights, thus boosting the generalization capability of quantized LLMs. We show the superiority of LRQ over prior LLM PTQ works under (i) 8-bit weight and per-tensor activation quantization, (ii) 4-bit weight and 8-bit per-token activation quantization, and (iii) low-bit weight-only quantization schemes. Our code is available at Software.
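To make the core idea concrete, here is a minimal NumPy sketch of quantizing a weight matrix with a low-rank per-weight scaling matrix. The parameterization `S = exp(A @ B)` and the 4-bit symmetric quantizer are illustrative assumptions, not the paper's exact formulation; the point is the parameter-count contrast between a full scaling matrix and its rank-r factorization.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4

W = rng.standard_normal((d_out, d_in)).astype(np.float32)

# Low-rank parameterization of the weight-scaling matrix: instead of
# learning d_out * d_in independent scales, learn two small factors
# whose product gives a per-weight scale (hypothetical form; the
# paper's exact parameterization may differ).
A = rng.standard_normal((d_out, rank)).astype(np.float32) * 0.01
B = rng.standard_normal((rank, d_in)).astype(np.float32) * 0.01
S = np.exp(A @ B)  # positive per-weight scales with rank-r structure

def quantize_int4(x, step):
    # Symmetric 4-bit quantizer: round to the grid, clip to [-8, 7].
    q = np.clip(np.round(x / step), -8, 7)
    return q * step

# Scale each weight individually, quantize, then undo the scaling.
step = np.abs(W / S).max() / 7.0
W_q = quantize_int4(W / S, step) * S

full_params = d_out * d_in          # one learnable scale per weight
lowrank_params = rank * (d_out + d_in)
print(full_params, lowrank_params)  # 4096 vs 512
```

In training, `A` and `B` would be optimized to minimize the block-output reconstruction error between `W_q` and `W`; the sketch above only shows the forward quantization path and the parameter savings.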
Problem

Research questions and friction points this paper is trying to address.

Weight-activation PTQ of LLMs still incurs non-negligible accuracy drops
Degradation is most severe on massive multitask language understanding (MMLU)
Full weight-scaling matrices require one learnable scale per weight, limiting generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-Rank Quantization (LRQ): low-rank weight-scaling matrices with parameter sharing
Reduces learnable parameters while still scaling weights individually
Improves the generalization capability of quantized LLMs