A Comprehensive Evaluation on Quantization Techniques for Large Language Models

📅 2025-07-23
🤖 AI Summary
Existing post-training quantization (PTQ) methods for large language models (LLMs) lack standardized evaluation and theoretical understanding. Method: This paper establishes a standardized experimental platform and proposes a decoupled two-stage analytical framework, "pre-quantization transformation" followed by "quantization error mitigation", systematically decomposing PTQ into orthogonal preprocessing and error-compensation modules. Contribution/Results: Extensive ablation studies show that the optimal pre-quantization transformation (e.g., optimized rotation plus scaling) does not transfer between INT4 and MXFP4 formats; that combining low-rank compensation with GPTQ occasionally outperforms GPTQ alone; and that MXFP4 requires format-specific preprocessing. The work uncovers intrinsic theoretical connections among mainstream PTQ approaches, yielding interpretable, reusable, component-level design principles and shifting PTQ development from empirical tuning toward principled, theory-driven engineering.
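To make the "pre-quantization transformation" stage concrete, here is a minimal NumPy sketch using SmoothQuant-style per-channel scaling as one instance of such a transform (rotation-based transforms follow the same pattern of an exactly invertible change of basis applied before quantization). The function name, the `alpha` parameter, and the toy shapes are illustrative, not taken from the paper:

```python
import numpy as np

def smooth_scales(X, W, alpha=0.5):
    """Per-channel smoothing factors s_j = max|X_j|^a / max|W_j|^(1-a).
    Dividing activations by s and multiplying the matching weight rows
    by s migrates outlier magnitude from X into W without changing
    the layer's output."""
    x_max = np.abs(X).max(axis=0)   # per input channel of X
    w_max = np.abs(W).max(axis=1)   # per input channel (row) of W
    return (x_max ** alpha) / (w_max ** (1 - alpha))

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 64))
X[:, 3] *= 40.0                     # one outlier activation channel
W = rng.standard_normal((64, 32)) * 0.05

s = smooth_scales(X, W)
X_s, W_s = X / s, W * s[:, None]    # transformed activation/weight pair

# The transformation is exact: (X / s) @ (diag(s) W) == X @ W.
assert np.allclose(X_s @ W_s, X @ W)

# The activation distribution is flatter, hence easier to quantize.
print(np.abs(X).max(), "->", np.abs(X_s).max())
```

The point of the sketch is the framework's decoupling: this step only reshapes the data distribution (flattening outliers) and is lossless by construction; all actual quantization error arises, and is compensated, in the second stage.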

📝 Abstract
For large language models (LLMs), post-training quantization (PTQ) can significantly reduce memory footprint and computational overhead. Model quantization is a rapidly evolving research field. Though many papers have reported breakthrough performance, they may not conduct experiments on the same ground since one quantization method usually contains multiple components. In addition, analyzing the theoretical connections among existing methods is crucial for in-depth understanding. To bridge these gaps, we conduct an extensive review of state-of-the-art methods and perform comprehensive evaluations on the same ground to ensure fair comparisons. To our knowledge, this fair and extensive investigation remains critically important yet underexplored. To better understand the theoretical connections, we decouple the published quantization methods into two steps: pre-quantization transformation and quantization error mitigation. We define the former as a preprocessing step applied before quantization to reduce the impact of outliers, making the data distribution flatter and more suitable for quantization. Quantization error mitigation involves techniques that offset the errors introduced during quantization, thereby enhancing model performance. We evaluate and analyze the impact of different components of quantization methods. Additionally, we analyze and evaluate the latest MXFP4 data format and its performance. Our experimental results demonstrate that optimized rotation and scaling yield the best performance for pre-quantization transformation, and combining low-rank compensation with GPTQ occasionally outperforms using GPTQ alone for quantization error mitigation. Furthermore, we explore the potential of the latest MXFP4 quantization and reveal that the optimal pre-quantization transformation strategy for INT4 does not generalize well to MXFP4, inspiring further investigation.
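The "quantization error mitigation" step described in the abstract can be illustrated with a toy low-rank compensation scheme: quantize the weights, then store a truncated-SVD approximation of the residual in full precision as a side correction. The INT4 quantizer, shapes, and rank below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def quantize_int4(W):
    """Symmetric per-tensor INT4 quantization (grid -8..7), returned
    in dequantized (floating-point) form."""
    scale = np.abs(W).max() / 7.0
    return np.clip(np.round(W / scale), -8, 7) * scale

def low_rank_compensation(W, W_q, rank=8):
    """Best rank-r approximation (truncated SVD) of the quantization
    residual E = W - W_q, kept in full precision."""
    U, s, Vt = np.linalg.svd(W - W_q, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128))
W_q = quantize_int4(W)
L = low_rank_compensation(W, W_q)

err_plain = np.linalg.norm(W - W_q)
err_comp = np.linalg.norm(W - (W_q + L))
# Eckart-Young: subtracting the best rank-r approximation of the
# residual can never increase the Frobenius reconstruction error.
assert err_comp <= err_plain
print(err_plain, "->", err_comp)
```

In weight-only form the compensation always helps the reconstruction error; the abstract's finding is the subtler point that, combined with GPTQ on real models, it only occasionally beats GPTQ alone.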
Problem

Research questions and friction points this paper is trying to address.

Evaluate quantization techniques for large language models
Compare methods fairly on same experimental ground
Analyze theoretical connections between quantization components
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extensive review of state-of-the-art quantization methods
Decoupling quantization into pre-quantization and error mitigation
Evaluating MXFP4 data format and transformation strategies
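For readers unfamiliar with MXFP4: it stores elements as FP4 (E2M1) values in blocks of 32 that share a single power-of-two scale. The sketch below quantizes one such block with a simple scale-selection heuristic (the E2M1 magnitude set is standard; the heuristic and function name are illustrative, not the exact OCP MX rounding algorithm):

```python
import numpy as np

# Magnitudes representable by FP4 E2M1 (sign is a separate bit).
E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def mxfp4_block(x):
    """Quantize one 32-element block: a shared power-of-two scale plus
    a per-element nearest E2M1 value. Returns (dequantized, scale)."""
    assert x.size == 32 and np.abs(x).max() > 0
    # Smallest power-of-two scale whose range (6 * scale) covers amax.
    scale = 2.0 ** np.ceil(np.log2(np.abs(x).max() / E2M1[-1]))
    mags = np.abs(x) / scale
    idx = np.argmin(np.abs(mags[:, None] - E2M1[None, :]), axis=1)
    return np.sign(x) * E2M1[idx] * scale, scale

rng = np.random.default_rng(0)
x = rng.standard_normal(32)
x_hat, scale = mxfp4_block(x)

# Nearest rounding bounds each element's error by half the widest
# codebook gap (between 4 and 6), i.e. one shared-scale unit.
assert np.max(np.abs(x - x_hat)) <= scale + 1e-12
print(scale, np.max(np.abs(x - x_hat)))
```

The non-uniform E2M1 grid and the power-of-two block scale are exactly why the paper finds that preprocessing tuned for the uniform INT4 grid does not carry over to MXFP4.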
Yutong Liu
Department of Computer Science and Technology, Tongji University, Shanghai 201804, China
Cairong Zhao
Tongji University
deep learning, computer vision, person re-id
Guosheng Hu
School of Engineering Math and Technology, University of Bristol, BS8 1QU, UK