🤖 AI Summary
Addressing two key bottlenecks of post-training quantization (PTQ), namely its strong dependence on calibration data and its inflexible bit-width configuration, this paper proposes RaanA, a unified quantization framework. Methodologically, RaanA introduces: (1) RaBitQ-H, a variant of the randomized vector quantization method RaBitQ, which drastically reduces calibration-data requirements; (2) AllocateBits, an optimization-based algorithm that models per-layer quantization sensitivity from gradients and allocates bit-widths across layers accordingly; and (3) low-rank error compensation to further improve quantization accuracy. Evaluated on LLaMA and OPT models, RaanA matches state-of-the-art methods using only 32 calibration samples per layer under W4A4 quantization, with an average task degradation of roughly 1.2%. It also accelerates the quantization process by 5–10× and supports arbitrary bit-width combinations, offering notable flexibility and efficiency.
📝 Abstract
Post-training Quantization (PTQ) has become a widely used technique for improving the inference efficiency of large language models (LLMs). However, existing PTQ methods generally suffer from crucial limitations such as heavy calibration data requirements and an inflexible choice of the target number of bits. In this paper, we propose RaanA, a unified PTQ framework that overcomes these challenges by introducing two novel components: 1) RaBitQ-H, a variant of the randomized vector quantization method RaBitQ, designed for fast, accurate, and highly efficient quantization; and 2) AllocateBits, an algorithm that optimally allocates bit-widths across layers based on their quantization sensitivity. RaanA achieves competitive performance with state-of-the-art quantization methods while being extremely fast, requiring minimal calibration data, and enabling flexible bit allocation. Extensive experiments demonstrate RaanA's efficacy in balancing efficiency and accuracy. The code is publicly available at https://github.com/FFTYYY/RaanA.
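The bit-allocation idea behind AllocateBits can be illustrated with a small sketch. To be clear, this is *not* the paper's actual algorithm: the error model below (a layer's quantization error decaying as sensitivity × 4^(−bits)) and the dynamic-programming solver are illustrative assumptions, standing in for whatever sensitivity estimates and optimizer RaanA actually uses. The sketch only shows the shape of the problem: pick a bit-width per layer from a candidate set, respect a total bit budget, and minimize the summed sensitivity-weighted error.

```python
def allocate_bits(sensitivities, candidates, total_budget):
    """Toy sensitivity-aware bit allocator (illustrative, not RaanA's AllocateBits).

    sensitivities: per-layer error weights (e.g. gradient-based estimates).
    candidates:    allowed bit-widths per layer, e.g. [2, 4, 8].
    total_budget:  total bits available, summed over all layers.

    Assumes a layer's quantization error is sensitivity * 4**(-bits),
    a common proxy (error ~ 2**(-2b)); this modeling choice is hypothetical.
    """
    # dp maps bits-used-so-far -> (min total error, per-layer assignment)
    dp = {0: (0.0, [])}
    for s in sensitivities:
        nxt = {}
        for used, (err, assign) in dp.items():
            for b in candidates:
                u = used + b
                if u > total_budget:
                    continue  # would exceed the budget
                e = err + s * 4.0 ** (-b)
                if u not in nxt or e < nxt[u][0]:
                    nxt[u] = (e, assign + [b])
        dp = nxt
    if not dp:
        raise ValueError("budget too small for any assignment")
    # Return the assignment with the lowest total modeled error.
    return min(dp.values(), key=lambda t: t[0])[1]

# Example: 4 layers with very different sensitivities and an 18-bit budget.
# The most sensitive layer gets 8 bits, the least sensitive gets 2.
bits = allocate_bits([10.0, 1.0, 5.0, 0.5], [2, 4, 8], total_budget=18)
print(bits)  # → [8, 4, 4, 2]
```

Under this toy model the optimum is non-uniform: widening the most sensitive layer buys more error reduction than it costs to narrow the least sensitive one, which is the intuition behind sensitivity-driven mixed-precision allocation.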