RaanA: A Fast, Flexible, and Data-Efficient Post-Training Quantization Algorithm

📅 2025-03-29
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses two key bottlenecks of post-training quantization (PTQ): strong dependence on calibration data and inflexible bit-width configuration. It proposes RaanA, a unified quantization framework with three components: (1) RaBitQ-H, a variant of the randomized vector quantization method RaBitQ, which drastically reduces calibration-data requirements; (2) AllocateBits, an optimization-based algorithm that models per-layer quantization sensitivity and allocates bit-widths across layers accordingly; and (3) low-rank error compensation to improve quantization accuracy. Evaluated on LLaMA and OPT models, RaanA achieves state-of-the-art performance with only 32 calibration samples per layer under W4A4 quantization, with an average task degradation of roughly 1.2%. It accelerates quantization by 5–10× and supports arbitrary bit-width combinations, offering flexible and efficient deployment.
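The low-rank error compensation mentioned above can be sketched generically: quantize the weight matrix, then approximate the remaining quantization residual with a truncated SVD whose small factors are kept in full precision. This is a minimal illustrative sketch, not RaanA's actual implementation; the uniform symmetric quantizer, the rank choice, and the function name are all assumptions.

```python
import numpy as np

def quantize_with_lowrank_compensation(W, n_bits=4, rank=8):
    """Generic sketch (not RaanA's exact method): uniform quantization
    plus a rank-r SVD approximation of the quantization residual."""
    # Uniform symmetric quantization to n_bits.
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(W).max() / qmax
    Q = np.clip(np.round(W / scale), -qmax - 1, qmax)
    W_q = Q * scale

    # Best rank-r approximation of the residual E = W - W_q (Eckart-Young).
    E = W - W_q
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    L = U[:, :rank] * s[:rank]   # (m, r), kept in full precision
    R = Vt[:rank, :]             # (r, n), kept in full precision

    # Effective weight at inference time: W_q + L @ R.
    return W_q, L, R

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
W_q, L, R = quantize_with_lowrank_compensation(W)
err_plain = np.linalg.norm(W - W_q)
err_comp = np.linalg.norm(W - (W_q + L @ R))
# The compensated error is strictly smaller, since the truncated SVD is the
# best rank-r fit of the residual in Frobenius norm.
```

The low-rank factors add only 2·m·r extra full-precision parameters per m×n layer, which is small when r ≪ min(m, n).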

📝 Abstract
Post-training Quantization (PTQ) has become a widely used technique for improving inference efficiency of large language models (LLMs). However, existing PTQ methods generally suffer from crucial limitations such as heavy calibration data requirements and inflexible choice of target number of bits. In this paper, we propose RaanA, a unified PTQ framework that overcomes these challenges by introducing two novel components: 1) RaBitQ-H, a variant of a randomized vector quantization method RaBitQ, designed for fast, accurate, and highly efficient quantization; and 2) AllocateBits, an algorithm that optimally allocates bit-widths across layers based on their quantization sensitivity. RaanA achieves competitive performance with state-of-the-art quantization methods while being extremely fast, requiring minimal calibration data, and enabling flexible bit allocation. Extensive experiments demonstrate RaanA's efficacy in balancing efficiency and accuracy. The code is publicly available at https://github.com/FFTYYY/RaanA.
Problem

Research questions and friction points this paper is trying to address.

Reduces heavy calibration data needs for LLM quantization
Enables flexible bit allocation across neural network layers
Improves speed and accuracy of post-training quantization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses RaBitQ-H for fast accurate quantization
Allocates bits optimally across layers via AllocateBits
Enables flexible bit allocation with minimal calibration data
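The AllocateBits idea in the bullets above (spend a global bit budget where layers are most sensitive) can be illustrated with a small greedy sketch. This is a hedged stand-in, not the paper's actual algorithm: the error model `sensitivity · 4^(−b)` (error roughly quartering per extra bit), the candidate bit widths, and the greedy budget-spending rule are all assumptions made for illustration.

```python
def allocate_bits(sensitivities, sizes, avg_bits, bit_options=(2, 4, 8)):
    """Greedy per-layer bit allocation sketch (assumed, not AllocateBits).

    sensitivities[l]: estimated quantization sensitivity of layer l
    sizes[l]:         parameter count of layer l
    avg_bits:         target average bits per parameter across all layers
    Assumed error model: err(l, b) = sensitivities[l] * 4.0 ** (-b).
    """
    n = len(sensitivities)
    bits = [min(bit_options)] * n  # start every layer at the lowest width
    budget = avg_bits * sum(sizes) - sum(b * s for b, s in zip(bits, sizes))

    while True:
        best = None  # (error drop per bit-parameter spent, layer, new bits, cost)
        for l in range(n):
            higher = [b for b in bit_options if b > bits[l]]
            if not higher:
                continue
            nb = higher[0]
            cost = (nb - bits[l]) * sizes[l]
            if cost > budget:
                continue
            drop = sensitivities[l] * (4.0 ** (-bits[l]) - 4.0 ** (-nb))
            if best is None or drop / cost > best[0]:
                best = (drop / cost, l, nb, cost)
        if best is None:  # no affordable upgrade left
            return bits
        _, l, nb, cost = best
        bits[l] = nb
        budget -= cost

# A highly sensitive layer is pushed to 8 bits while the rest stay at 2,
# keeping the average at the 4-bit budget.
plan = allocate_bits([100.0, 1.0, 1.0], [1, 1, 1], avg_bits=4)
```

The real AllocateBits is described as solving this allocation optimally rather than greedily; the sketch only conveys the sensitivity-weighted budget trade-off.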
Yongyi Yang
University of Michigan
Machine learning · Graph neural networks
Jianyang Gao
College of Computing and Data Science, Nanyang Technological University, Singapore
Wei Hu
Computer Science and Engineering, University of Michigan, Ann Arbor, MI, USA