🤖 AI Summary
Large language models (LLMs) suffer significant accuracy degradation under ultra-low-bit post-training quantization (PTQ) due to high-impact parameters; existing approaches retain FP16 precision for a fixed proportion of parameters across all layers, ignoring inter-layer sensitivity variations. To address this, we propose the first layer-wise optimization framework to explicitly model these variations: it allocates the proportion of high-impact parameters per layer via quadratic optimization and introduces a hybrid strategy that applies high-fidelity quantization to high-impact parameters and lightweight quantization to the rest. By reallocating parameter importance across layers under a fixed resource budget, the method substantially reduces quantization error. Experiments demonstrate state-of-the-art accuracy at 2–4 bits without compromising computational efficiency.
📝 Abstract
Large language models (LLMs) have significantly advanced natural language processing, but their massive parameter counts create substantial computational and memory challenges during deployment. Post-training quantization (PTQ) has emerged as a promising approach to mitigate these challenges with minimal overhead. While existing PTQ methods can effectively quantize LLMs, they suffer substantial accuracy loss at extremely low bit-widths, primarily due to high-impact parameters that strongly influence quantization performance. Several approaches address this by identifying high-impact parameters and retaining them in FP16 format. However, they apply a fixed ratio of high-impact parameters across all layers, overlooking layer-wise sensitivity variations. In this paper, we propose a quadratic optimization framework that determines layer-specific ratios of high-impact parameters while accounting for inter-layer dependencies. We quantize high-impact parameters to moderate bit-widths, which typically incurs negligible performance degradation, while the remaining parameters are quantized to extremely low bit-widths. Under the same resource budget, this preserves more high-impact parameters than methods that retain only a small fraction in FP16 format. The framework also lets us apply an advanced quantization method, which often requires many learnable parameters, solely to the high-impact parameters, while using a computationally efficient method for the rest. Our approach strikes an effective balance between computational efficiency and model accuracy, achieving performance competitive with state-of-the-art methods.
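To make the allocation idea concrete: as a toy sketch only (the paper's actual objective, sensitivity measure, and constraints are not given in this abstract), one common way to pose such a layer-wise budget problem is a box-constrained quadratic program. Here each layer `l` is assigned a hypothetical sensitivity `s_l` and parameter-count weight `w_l`, the error is modeled as `s_l * (1 - r_l)^2` in the high-impact ratio `r_l`, and the total spend `sum(w_l * r_l)` must meet a fixed budget. The KKT conditions give a water-filling solution, found by bisecting on the Lagrange multiplier:

```python
import numpy as np

def allocate_ratios(sensitivity, weights, budget, iters=100):
    """Toy per-layer allocation: minimize sum_l s_l * (1 - r_l)^2
    subject to sum_l w_l * r_l = budget and 0 <= r_l <= 1.

    From the KKT conditions, r_l = clip(1 - lam * w_l / (2 s_l), 0, 1)
    for some multiplier lam >= 0; the total spend is non-increasing
    in lam, so bisection recovers the budget-matching solution.
    """
    s = np.asarray(sensitivity, dtype=float)
    w = np.asarray(weights, dtype=float)

    def ratios(lam):
        return np.clip(1.0 - lam * w / (2.0 * s), 0.0, 1.0)

    lo = 0.0                            # lam = 0 -> every r_l = 1 (max spend)
    hi = 2.0 * np.max(s / w) + 1.0      # large enough that every r_l = 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.dot(w, ratios(mid)) > budget:
            lo = mid                    # spending too much: raise lam
        else:
            hi = mid                    # spending too little: lower lam
    return ratios(0.5 * (lo + hi))

# Example: three layers of equal size, the first 4x more sensitive,
# with an average high-impact ratio of 0.5 across the model.
r = allocate_ratios([4.0, 1.0, 1.0], [1.0, 1.0, 1.0], budget=1.5)
```

In this sketch the more sensitive layer automatically receives a larger share of the high-impact budget, which is the qualitative behavior the layer-specific allocation aims for; the paper's framework additionally accounts for inter-layer dependencies, which this independent per-layer model ignores.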