FAQ: Mitigating Quantization Error via Regenerating Calibration Data with Family-Aware Quantization

📅 2026-01-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant accuracy degradation in traditional post-training quantization (PTQ) of large language models, which stems from the inability to accurately model activation distributions during inference due to limited calibration samples. To overcome this limitation, the authors propose the FAQ framework, which leverages prior knowledge from sibling large language models to regenerate high-fidelity calibration data through chain-of-thought reasoning. Furthermore, FAQ introduces an expert-guided inter-group competition mechanism and a renormalization strategy to enhance the representativeness and generalizability of the calibration samples. Experimental results demonstrate that, on models such as Qwen3-8B, FAQ reduces quantization-induced accuracy loss by up to 28.5% compared to using the original calibration data.

📝 Abstract
Although post-training quantization (PTQ) provides an efficient numerical compression scheme for deploying large language models (LLMs) on resource-constrained devices, the representativeness and universality of calibration data remain a core bottleneck for the accuracy of quantization parameters. Traditional PTQ methods typically rely on limited samples, making it difficult to capture the activation distribution seen during inference and leading to biased quantization parameters. To address this, we propose FAQ (Family-Aware Quantization), a calibration data regeneration framework that leverages prior knowledge from LLMs of the same family to generate high-fidelity calibration samples. Specifically, FAQ first feeds the original calibration samples into a larger LLM from the same family as the target model, which regenerates a series of high-fidelity calibration data grounded in a highly consistent knowledge system. This data, carrying chain-of-thought reasoning and conforming to the expected activation distribution, then undergoes expert-guided group competition to select the best samples, which are re-normalized to improve the effectiveness of standard PTQ. Experiments on multiple model series, including Qwen3-8B, show that FAQ reduces accuracy loss by up to 28.5% compared to a baseline using the original calibration data, demonstrating its practical effectiveness.
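The abstract describes a three-stage pipeline: regenerate calibration samples with a larger same-family model, run expert-guided group competition to pick the best candidates, then re-normalize the selection before standard PTQ. The sketch below illustrates that control flow only; the function names (`regenerate_with_family_model`, `expert_score`, `renormalize`) and the simple length-based scoring are hypothetical placeholders, not the paper's actual implementation, which would call a real LLM and an expert scoring model.

```python
def regenerate_with_family_model(sample: str) -> list[str]:
    # Hypothetical stand-in for prompting a larger same-family LLM to
    # rewrite one calibration sample into chain-of-thought variants.
    return [f"{sample} [CoT variant {i}]" for i in range(4)]

def expert_score(sample: str) -> float:
    # Placeholder expert-guided score; the paper's expert mechanism is
    # not specified here, so sample length serves as a dummy proxy.
    return float(len(sample))

def renormalize(samples: list[str]) -> list[str]:
    # Placeholder re-normalization step: deduplicate and give the
    # selected samples a stable ordering before PTQ calibration.
    return sorted(set(samples))

def faq_calibration(original: list[str], keep_per_group: int = 2) -> list[str]:
    """Regenerate each seed sample, run group competition, re-normalize."""
    selected = []
    for sample in original:
        group = regenerate_with_family_model(sample)  # one group per seed
        group.sort(key=expert_score, reverse=True)    # group competition
        selected.extend(group[:keep_per_group])       # keep best candidates
    return renormalize(selected)

calib = faq_calibration(["What is 2+2?", "Explain gravity."])
print(len(calib))  # 4: two surviving variants per seed sample
```

The resulting `calib` list would replace the original samples as input to any standard PTQ calibration routine.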
Problem

Research questions and friction points this paper is trying to address.

post-training quantization
calibration data
quantization error
activation distribution
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Family-Aware Quantization
Post-Training Quantization
Calibration Data Regeneration
Chain-of-Thought Reasoning
Activation Distribution
Haiyang Xiao
Alibaba Cloud Computing
Weiqing Li
Alibaba Cloud Computing
Jinyue Guo
Alibaba Cloud Computing, Chinese Academy of Sciences
Guochao Jiang
Fudan University, Alibaba Group
Guohua Liu
Alibaba Cloud Computing
Yuewei Zhang
Alibaba Cloud

Natural Language Processing · Large Language Models · llm