A Survey of Low-bit Large Language Models: Basics, Systems, and Algorithms

📅 2024-09-25
🏛️ arXiv.org
📈 Citations: 19
Influential: 1
🤖 AI Summary
To address excessive memory and computational overhead in large language model (LLM) deployment, this paper presents the first unified survey of low-bit quantization techniques from three complementary perspectives: foundational theory, systems implementation, and algorithmic design. We propose a structured taxonomy covering emerging numeric formats (e.g., INT1–INT4, FP8/FP6), quantization strategies (layer-wise, grouped, and mixed-precision), and calibration and fine-tuning algorithms. We systematically evaluate mainstream toolchains—including vLLM, AWQ, and GPTQ—and synthesize over 100 state-of-the-art works. Our analysis reveals that low-bit LLMs achieve 3–8× memory compression and 1.5–3× inference speedup, while exposing critical trade-offs in hardware compatibility, numerical stability, and training robustness. The survey establishes theoretical foundations and practical guidelines for efficient, reproducible low-bit LLM deployment.
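The reported 3–8× memory compression follows directly from the bit-width arithmetic. As a hedged back-of-envelope sketch (illustrative figures only; real deployments also carry activation, KV-cache, and per-group scale/zero-point overhead, so effective savings differ):

```python
# Weight-memory footprint of a 7B-parameter model at several bit-widths.
# Numbers are illustrative; they cover weights only.

def weight_memory_gib(num_params: float, bits: int) -> float:
    """GiB needed to store num_params weights at the given bit-width."""
    return num_params * bits / 8 / 2**30

params = 7e9  # a 7B-parameter LLM
for bits in (16, 8, 4, 2):
    print(f"{bits:>2}-bit weights: {weight_memory_gib(params, bits):6.2f} GiB")
```

Going from FP16 to INT4, for instance, shrinks the weight footprint by exactly 4×, which sits inside the 3–8× range once format mixes and metadata overhead are accounted for.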

📝 Abstract
Large language models (LLMs) have achieved remarkable advancements in natural language processing, showcasing exceptional performance across various tasks. However, the expensive memory and computational requirements present significant challenges for their practical deployment. Low-bit quantization has emerged as a critical approach to mitigate these challenges by reducing the bit-width of model parameters, activations, and gradients, thus decreasing memory usage and computational demands. This paper presents a comprehensive survey of low-bit quantization methods tailored for LLMs, covering the fundamental principles, system implementations, and algorithmic strategies. An overview of basic concepts and new data formats specific to low-bit LLMs is first introduced, followed by a review of frameworks and systems that facilitate low-bit LLMs across various hardware platforms. Then, we categorize and analyze techniques and toolkits for efficient low-bit training and inference of LLMs. Finally, we conclude with a discussion of future trends and potential advancements of low-bit LLMs. Our systematic overview from basic, system, and algorithm perspectives can offer valuable insights and guidelines for future works to enhance the efficiency and applicability of LLMs through low-bit quantization.
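The grouped quantization strategy the abstract mentions can be sketched in a few lines. This is a minimal illustration of grouped *symmetric* INT4 weight quantization with one scale per group; the group size and symmetric scheme are my illustrative choices, not a specific algorithm from the survey:

```python
# Minimal sketch of grouped symmetric INT4 weight quantization.
# Assumptions: 1-D weights, length divisible by group_size, symmetric
# range [-7, 7] so a single per-group scale suffices (no zero-point).
import numpy as np

def quantize_grouped_int4(w: np.ndarray, group_size: int = 128):
    """Quantize a 1-D weight vector to INT4 codes with one scale per group."""
    groups = w.reshape(-1, group_size)                    # (num_groups, group_size)
    scale = np.abs(groups).max(axis=1, keepdims=True) / 7 # map max |w| to 7
    q = np.clip(np.round(groups / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    """Reconstruct approximate FP32 weights from INT4 codes and scales."""
    return (q.astype(np.float32) * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
q, scale = quantize_grouped_int4(w)
w_hat = dequantize(q, scale)
```

Per-group scales bound the rounding error by half a quantization step of the group's own range, which is why grouped schemes tolerate outlier weights better than a single layer-wide scale.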
Problem

Research questions and friction points this paper is trying to address.

Reducing LLM memory and computational demands via low-bit quantization
Surveying quantization methods for efficient LLM training and inference
Exploring system implementations and algorithms for low-bit LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-bit quantization reduces the bit-width of model parameters, activations, and gradients
System implementations that support low-bit LLMs across diverse hardware platforms
Toolkits and techniques for efficient low-bit training and inference of LLMs
Authors
Ruihao Gong — Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
Yifu Ding — Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
Zining Wang — Beihang University
Chengtao Lv — Nanyang Technological University (Efficient AI)
Xingyu Zheng — Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
Jinyang Du — Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
Haotong Qin — ETH Zürich (TinyML, Model Compression, Computer Vision, Deep Learning)
Jinyang Guo — The University of Sydney (Deep Learning, Efficient Methods, Edge Computing)
Michele Magno — ETH Zürich (Wireless Sensor Networks, Smart Sensors and Internet of Things, Wake-up Radio, Power Management, Energy Harvesters)
Xianglong Liu — Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China