GWQ: Gradient-Aware Weight Quantization for Large Language Models

📅 2024-10-30
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
To address the challenge of deploying large language models (LLMs) on resource-constrained devices due to their massive parameter counts, this paper proposes a gradient-aware weight quantization method. The approach introduces a novel gradient-driven outlier identification mechanism that requires only a single calibration pass for low-bit compression: approximately 1% of weights most sensitive to gradients are preserved in FP16 precision, while the remainder are quantized to low-bit representations (e.g., 4-bit), drastically reducing calibration data dependency. Evaluated across multiple benchmark tasks, the method consistently outperforms state-of-the-art quantization techniques—achieving a 1.2× inference speedup and substantial memory footprint reduction—enabling efficient edge deployment. Key contributions include (1) gradient-guided outlier detection, (2) adaptive mixed-precision weight allocation, and (3) an ultra-lightweight calibration paradigm.
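The gradient-driven selection described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: it assumes the product |gradient| × |weight| as the sensitivity score (the paper's exact criterion may differ) and simply marks the top 1% most sensitive weights as FP16 outliers.

```python
import numpy as np

def select_outliers_by_gradient(weights, grads, keep_ratio=0.01):
    """Rank weights by a gradient-sensitivity proxy (|g| * |w|) and return
    a boolean mask flagging the top `keep_ratio` fraction as FP16 outliers;
    everything else becomes a candidate for low-bit quantization."""
    sensitivity = np.abs(grads) * np.abs(weights)
    k = max(1, int(keep_ratio * weights.size))
    # Flat indices of the k most gradient-sensitive weights
    top_idx = np.argpartition(sensitivity.ravel(), -k)[-k:]
    mask = np.zeros(weights.size, dtype=bool)
    mask[top_idx] = True
    return mask.reshape(weights.shape)

# Toy example: random weights and gradients standing in for one
# calibration pass over a single weight matrix
rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128)).astype(np.float32)
g = rng.standard_normal((128, 128)).astype(np.float32)
outlier_mask = select_outliers_by_gradient(w, g, keep_ratio=0.01)
print(outlier_mask.sum())  # 163, i.e. ~1% of 16384 weights
```

Because the mask is derived from gradients of a single calibration pass rather than from activation statistics collected over many batches, this style of selection is what lets the calibration data requirement stay minimal.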

📝 Abstract
Large language models (LLMs) show impressive performance in solving complex language tasks. However, their large number of parameters presents significant challenges for deployment, so compressing LLMs to low bit-widths can enable deployment on resource-constrained devices. To address this problem, we propose gradient-aware weight quantization (GWQ), the first low-bit weight quantization approach that leverages gradients to localize outliers, requiring only a minimal amount of calibration data for outlier detection. GWQ preferentially retains the top 1% of outliers at FP16 precision, while the remaining non-outlier weights are stored in a low-bit format. We evaluate GWQ widely across tasks including language modeling, grounding detection, massive multitask language understanding, and vision-language question answering. Results show that models quantized with GWQ perform better than those produced by other quantization methods. During the quantization process, GWQ needs only a single calibration set to achieve effective quantization. GWQ also achieves a 1.2× inference speedup over the original model and effectively reduces inference memory.
Problem

Research questions and friction points this paper is trying to address.

Compress large language models for resource-constrained devices
Quantize weights to low bits with minimal calibration data
Improve inference speed and reduce memory usage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-aware weight quantization for LLMs
Minimal calibration data for outlier detection
Retains top 1% outliers at FP16 precision
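The mixed-precision allocation in the bullets above can be illustrated with a small NumPy sketch. It is a simplified stand-in for the paper's method: it assumes symmetric per-tensor 4-bit quantization for non-outliers (the paper may use per-group scales or a different rounding scheme), keeps masked outliers at FP16, and here builds the outlier mask from weight magnitude only, as a placeholder for the gradient-derived mask.

```python
import numpy as np

def quantize_mixed_precision(weights, outlier_mask, bits=4):
    """Quantize non-outlier weights to `bits` bits with a symmetric
    per-tensor scale; outliers (per the mask) stay at FP16 precision.
    Returns the dequantized tensor and the scale used."""
    qmax = 2 ** (bits - 1) - 1                         # 7 for signed 4-bit
    non_outliers = weights[~outlier_mask]
    scale = np.abs(non_outliers).max() / qmax if non_outliers.size else 1.0
    q = np.clip(np.round(non_outliers / scale), -qmax - 1, qmax)
    # Outlier path: FP16 round-trip; non-outlier path: dequantized low-bit
    deq = weights.astype(np.float16).astype(np.float32)
    deq[~outlier_mask] = q * scale
    return deq, scale

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64)).astype(np.float32)
# Placeholder mask: top ~1% of weights by magnitude (41 of 4096 entries)
mask = np.zeros_like(w, dtype=bool)
mask[np.unravel_index(np.argsort(np.abs(w), axis=None)[-41:], w.shape)] = True
deq, scale = quantize_mixed_precision(w, mask)
```

With rounding to the nearest quantization level, the per-weight reconstruction error on the low-bit path is bounded by half the scale, while the FP16 outliers incur only negligible round-off; this is the trade-off that lets roughly 99% of the storage drop to 4 bits.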