🤖 AI Summary
Existing compensation-based quantization methods for large language models suffer from inaccurate calibration targets, leading to inadequate modeling of residual errors and degraded quantization accuracy. This work redefines the calibration objective as aligning with the full-precision model's output and introduces the concept of "compensation-aware error" to explicitly capture the additional error introduced by the discrepancy between compensated and original weights. By leveraging neuron decomposition, this error term is efficiently integrated into the quantization process, enabling seamless compatibility with both the GPTQ and GPTAQ frameworks. Extensive experiments demonstrate that the proposed approach consistently achieves significant performance gains across various state-of-the-art large language models and ultra-low-bit settings, highlighting its effectiveness and broad applicability.
📝 Abstract
Weight-compensation methods, which iteratively quantize weights and compensate the remaining ones to minimize the output error, have recently demonstrated remarkable success in quantizing Large Language Models (LLMs). The representative work, GPTQ, introduces several key techniques that make such iterative methods practical for LLMs with billions of parameters. GPTAQ extends this approach with an asymmetric calibration process that aligns the output of each quantized layer with its full-precision counterpart, incorporating a residual error into the weight compensation framework. In this work, we revisit the formulation of the residual error. We identify a sub-optimal calibration objective in existing methods: during the intra-layer calibration process, they align the quantized output with the output from compensated weights, rather than the true output of the original full-precision model. We therefore redefine the objective to precisely align the quantized model's output with the original output of the full-precision model at each step. We then reveal that the residual error originates not only from the output difference of the preceding layer but also from the discrepancy between the compensated and original weights within each layer, which we name the 'compensation-aware error'. By inheriting the neuron decomposition technique from GPTAQ, we can efficiently incorporate this compensation-aware error into the weight update process. Extensive experiments on various LLMs and quantization settings demonstrate that our proposed enhancements integrate seamlessly with both GPTQ and GPTAQ, significantly improving their quantization performance. Our code is publicly available at https://github.com/list0830/ResComp.
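The distinction between the two calibration targets can be illustrated with a toy linear layer. The sketch below is not the paper's implementation; all names, shapes, and the quantizer are illustrative assumptions. It shows that the gap between the sub-optimal target (the compensated-weight output) and the proposed target (the true full-precision output) is exactly an extra term `X @ (Wc - W)`, i.e., the error caused by the discrepancy between compensated and original weights, which the abstract calls the compensation-aware error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shapes (hypothetical, not from the paper).
n_samples, d_in, d_out = 64, 16, 8
X = rng.normal(size=(n_samples, d_in))      # calibration inputs
W = rng.normal(size=(d_in, d_out))          # original full-precision weights
Wc = W + 0.01 * rng.normal(size=W.shape)    # compensated weights, drifted from W
Wq = np.round(Wc * 4) / 4                   # toy uniform quantization of Wc

# Sub-optimal objective in existing methods: match the compensated-weight output.
err_compensated_target = X @ Wq - X @ Wc

# Proposed objective: match the true output of the full-precision model.
err_fp_target = X @ Wq - X @ W

# The gap between the two targets depends only on the intra-layer weight
# discrepancy Wc - W: this is the 'compensation-aware error'.
compensation_aware_error = X @ (Wc - W)

# err_fp_target decomposes exactly into the old error plus the new term.
gap = np.linalg.norm(err_fp_target - (err_compensated_target + compensation_aware_error))
print(gap)  # numerically zero up to floating-point rounding
```

The decomposition follows from `X @ Wq - X @ W = (X @ Wq - X @ Wc) + X @ (Wc - W)`, which is why the compensation-aware error can be folded into the per-step weight update alongside the usual residual.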