🤖 AI Summary
This work addresses the significant accuracy degradation in traditional post-training quantization (PTQ) of large language models, which stems from the inability to accurately model activation distributions during inference due to limited calibration samples. To overcome this limitation, the authors propose the FAQ framework, which leverages prior knowledge from sibling large language models to regenerate high-fidelity calibration data through chain-of-thought reasoning. Furthermore, FAQ introduces an expert-guided inter-group competition mechanism and a renormalization strategy to enhance the representativeness and generalizability of the calibration samples. Experimental results demonstrate that, on models such as Qwen3-8B, FAQ reduces quantization-induced accuracy loss by up to 28.5% compared to using the original calibration data.
📝 Abstract
Although post-training quantization (PTQ) provides an efficient numerical compression scheme for deploying large language models (LLMs) on resource-constrained devices, the representativeness and generality of the calibration data remain a core bottleneck for the accuracy of the quantization parameters. Traditional PTQ methods typically rely on a limited number of samples, which makes it difficult to capture the activation distribution seen during inference and introduces bias into the quantization parameters. To address this, we propose **FAQ** (Family-Aware Quantization), a calibration data regeneration framework that leverages prior knowledge from LLMs of the same family to generate high-fidelity calibration samples. Specifically, FAQ first feeds the original calibration samples into a larger LLM from the same family as the target model, which regenerates a series of high-fidelity calibration samples from a highly consistent knowledge base. This data, which carries Chain-of-Thought reasoning and conforms to the expected activation distribution, then undergoes expert-guided group competition to select the best samples, which are re-normalized to strengthen standard PTQ. Experiments on multiple model series, including Qwen3-8B, show that FAQ reduces accuracy loss by up to 28.5% compared to the baseline with the original calibration data, demonstrating its effectiveness.
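The selection and renormalization stages described above can be sketched minimally. This is an illustrative sketch only, not the paper's implementation: the function names, the data layout, and the choice of KL divergence over activation histograms as the "competition" score are all assumptions introduced here for clarity.

```python
import math

def kl_divergence(p, q, eps=1e-9):
    # KL(p || q) between two normalized histograms; smaller = closer match.
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def histogram(values, bins=10, lo=-3.0, hi=3.0):
    # Normalized histogram of activation values over a fixed range.
    counts = [0] * bins
    for v in values:
        idx = min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))
        counts[idx] += 1
    total = max(1, len(values))
    return [c / total for c in counts]

def select_best_per_group(groups, reference_acts, bins=10):
    # Group competition (illustrative stand-in for the paper's expert-guided
    # mechanism): within each group of candidate samples, keep the one whose
    # activation histogram is closest (lowest KL) to the reference distribution.
    ref_hist = histogram(reference_acts, bins)
    winners = []
    for group in groups:
        best = min(group,
                   key=lambda s: kl_divergence(histogram(s["acts"], bins), ref_hist))
        winners.append(best)
    return winners

def renormalize(samples):
    # Renormalization (illustrative): shift/scale each winner's activations to
    # zero mean and unit variance before handing them to a standard PTQ pass.
    out = []
    for s in samples:
        acts = s["acts"]
        mu = sum(acts) / len(acts)
        sd = math.sqrt(sum((a - mu) ** 2 for a in acts) / len(acts)) or 1.0
        out.append({"text": s["text"], "acts": [(a - mu) / sd for a in acts]})
    return out
```

In a real pipeline, `reference_acts` would come from profiling the target model on held-out data and each group would hold the regenerated candidates produced by the larger family model for one original calibration sample.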