ABQ-LLM: Arbitrary-Bit Quantized Inference Acceleration for Large Language Models

📅 2024-08-16
🏛️ AAAI Conference on Artificial Intelligence
📈 Citations: 8 (2 influential)
🤖 AI Summary
To address the dual challenges of performance degradation in low-bit quantization of large language models (LLMs) and hardware limitations (GPU integer compute units support only INT4/INT8 arithmetic, which hinders acceleration of mixed-precision matrix multiplication), this paper proposes an arbitrary-bit quantization inference acceleration framework. Methodologically, it introduces: (1) a distribution correction technique to mitigate post-training quantization (PTQ) accuracy loss; (2) a bit-balance strategy to counteract performance degradation from asymmetric quantization grids at very low bit widths (e.g., 2-bit); and (3) a Binary TensorCore (BTC) equivalent reconstruction of matrix multiplication that sidesteps hardware precision constraints and supports flexible mixed-precision deployments such as W2A8. On LLaMA-7B, the W2A8 configuration achieves a WikiText2 perplexity of 7.59 (2.17 lower than AffineQuant's 9.76), with a 1.6× inference speedup and 2.7× memory compression relative to SmoothQuant.
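The BTC-equivalent reconstruction builds on a standard bit-plane identity: if p-bit weights and q-bit activations are expanded as W = Σᵢ 2ⁱWᵢ and A = Σⱼ 2ʲAⱼ with binary planes Wᵢ, Aⱼ, then W·A = Σᵢⱼ 2^(i+j) Wᵢ·Aⱼ, a weighted sum of p·q purely binary matmuls that Binary TensorCores can execute for any bit combination. Below is a minimal NumPy emulation of that identity for unsigned operands (no scales, zero-points, or kernel fusion); it illustrates the decomposition, not ABQ-LLM's actual CUDA kernels.

```python
import numpy as np

def bit_planes(x, bits):
    """Decompose an unsigned integer matrix into its binary bit-planes."""
    return [(x >> i) & 1 for i in range(bits)]

def binary_decomposed_matmul(W, A, w_bits, a_bits):
    """Emulate a w_bits x a_bits integer matmul as a weighted sum of
    1-bit matmuls, the way BTC (Binary TensorCore) kernels would."""
    acc = np.zeros((W.shape[0], A.shape[1]), dtype=np.int64)
    for i, Wi in enumerate(bit_planes(W, w_bits)):
        for j, Aj in enumerate(bit_planes(A, a_bits)):
            # Each Wi @ Aj is a pure binary matmul (BMMA on hardware),
            # weighted by 2^(i+j) in the final accumulation.
            acc += (Wi.astype(np.int64) @ Aj.astype(np.int64)) << (i + j)
    return acc

# Sanity check against a direct integer matmul for the W2A8 case.
rng = np.random.default_rng(0)
W = rng.integers(0, 2**2, size=(4, 8), dtype=np.int64)   # 2-bit weights
A = rng.integers(0, 2**8, size=(8, 3), dtype=np.int64)   # 8-bit activations
assert np.array_equal(binary_decomposed_matmul(W, A, 2, 8), W @ A)
```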

📝 Abstract
Large Language Models (LLMs) have revolutionized natural language processing tasks. However, their practical application is constrained by substantial memory and computational demands. Post-training quantization (PTQ) is considered an effective method to accelerate LLM inference. Despite its growing popularity in LLM model compression, PTQ deployment faces two major challenges. First, low-bit quantization leads to performance degradation. Second, restricted by the limited types of integer computing units on GPUs, quantized matrix operations with different precisions cannot be effectively accelerated. To address these issues, we introduce a novel arbitrary-bit quantization algorithm and inference framework, ABQ-LLM. It achieves superior performance across various quantization settings and enables efficient arbitrary-precision quantized inference on the GPU. ABQ-LLM introduces several key innovations: (1) a distribution correction method for transformer blocks to mitigate distribution differences caused by full quantization of weights and activations, improving performance at low bit-widths; (2) a bit balance strategy to counteract performance degradation from asymmetric distribution issues at very low bit-widths (e.g., 2-bit); (3) an innovative quantization acceleration framework that reconstructs quantized matrix multiplication of arbitrary precision combinations on BTC (Binary TensorCore) equivalents, freeing it from the limitations of INT4/INT8 computing units. ABQ-LLM can convert each component's bit-width gain into an actual acceleration gain, maximizing performance under mixed precision (e.g., W6A6, W2A8). Under the W2A8 quantization configuration on the LLaMA-7B model, it achieved a WikiText2 perplexity of 7.59 (2.17 lower than AffineQuant's 9.76). Compared to SmoothQuant, it realized a 1.6× speedup and a 2.7× memory compression gain.
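For orientation, the sketch below shows a plain round-to-nearest asymmetric PTQ baseline for a W2A8 setting (per-output-channel 2-bit weights, per-tensor 8-bit activations). ABQ-LLM's distribution correction and bit-balance strategy are refinements over this kind of baseline and are not reproduced here; the helper name `quantize` is illustrative.

```python
import numpy as np

def quantize(x, bits, axis=None):
    """Generic asymmetric round-to-nearest quantization (scale + zero-point).
    A baseline PTQ step, not ABQ-LLM's distribution-corrected variant."""
    qmax = 2**bits - 1
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    scale = (hi - lo) / qmax
    zero = np.round(-lo / scale)
    q = np.clip(np.round(x / scale) + zero, 0, qmax)
    return q.astype(np.int32), scale, zero

W = np.random.randn(16, 64).astype(np.float32)
A = np.random.randn(64, 8).astype(np.float32)
Wq, w_scale, w_zero = quantize(W, bits=2, axis=1)  # per-channel weights
Aq, a_scale, a_zero = quantize(A, bits=8)          # per-tensor activations
W_hat = (Wq - w_zero) * w_scale                    # dequantized weights
A_hat = (Aq - a_zero) * a_scale
err = np.abs(W_hat @ A_hat - W @ A).mean()
print(f"mean |error| of the W2A8 baseline reconstruction: {err:.4f}")
```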
Problem

Research questions and friction points this paper is trying to address.

Reducing performance degradation in low-bit LLM quantization
Enabling efficient arbitrary-precision quantized GPU inference
Overcoming limitations of fixed integer computing units
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distribution correction for transformer blocks
Bit balance strategy for low bit-widths
BTC-based arbitrary-precision quantization framework (see the work-count sketch after this list)
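A back-of-envelope reading of why per-component bit-width gains become runtime gains: under the bit-plane decomposition sketched earlier, a WpAq matmul expands into p·q binary matmuls, so kernel work scales with the product of the two bit widths. Real kernels add packing and reduction overheads, which is why measured speedups such as the reported 1.6× are smaller than this ideal ratio.

```python
# Ideal binary-matmul count for a WpAq GEMM under bit-plane
# decomposition: p * q, relative to a W8A8 baseline of 64.
baseline = 8 * 8
for p, q in [(8, 8), (6, 6), (4, 4), (2, 8)]:
    print(f"W{p}A{q}: {p * q:2d} binary matmuls, "
          f"{p * q / baseline:.2f}x the W8A8 work")
```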
👥 Authors

Chao Zeng, ByteDance Inc.
Songwei Liu, ByteDance Inc.
Yusheng Xie, ByteDance Inc.
Hong Liu, ByteDance Inc.
Xiaojian Wang, Assistant Professor, University of Colorado Denver
Miao Wei, ByteDance Inc.
Shu Yang, ByteDance Inc.
Fangmin Chen, ByteDance Inc.
Xing Mei, ByteDance Inc.