🤖 AI Summary
This work addresses the performance degradation commonly observed in low-bit post-training quantization (PTQ) of large language models, which often stems from activation mismatch between the full-precision and quantized models. To mitigate this issue, the authors propose a regularized asymmetric calibration method that recasts calibration as a form of regularization by interpolating between symmetric and asymmetric calibration strategies. They further introduce a successive rounding procedure along with a bounded-search extension, which balances quantization accuracy against computational overhead while preserving the quadratic optimization structure inherent to PTQ. Extensive experiments demonstrate that the proposed approach consistently outperforms existing PTQ methods across multiple large language models, bit widths, and evaluation benchmarks, achieving notable improvements in both perplexity and task accuracy with only a modest and controllable increase in computational cost.
📝 Abstract
Large language models (LLMs) deliver robust performance across diverse applications, yet their deployment often faces challenges due to the memory and latency costs of storing and accessing billions of parameters. Post-training quantization (PTQ) enables efficient inference by mapping pretrained weights to low-bit formats without retraining, but its effectiveness depends critically on both the quantization objective and the rounding procedure used to obtain low-bit weight representations. In this work, we show that interpolating between symmetric and asymmetric calibration acts as a form of regularization that preserves the standard quadratic structure used in PTQ while providing robustness to activation mismatch. Building on this perspective, we derive a simple successive rounding procedure that naturally incorporates asymmetric calibration, as well as a bounded-search extension that allows for an explicit trade-off between quantization quality and compute cost. Experiments across multiple LLM families, quantization bit-widths, and benchmarks demonstrate that the proposed bounded search based on a regularized asymmetric calibration objective consistently improves perplexity and accuracy over PTQ baselines, while incurring only modest and controllable additional computational cost.
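To make the two core ideas concrete, here is a minimal sketch of (a) a calibration Hessian interpolated between symmetric and asymmetric objectives and (b) greedy successive rounding with error compensation on one weight row. The function names, the blending form, and the compensation update are illustrative assumptions in the spirit of OBQ/GPTQ-style quadratic PTQ, not the paper's exact algorithm.

```python
import numpy as np

def blended_hessian(X_fp, X_q, lam):
    """Interpolate between the symmetric calibration Hessian (built from
    full-precision activations X_fp) and the asymmetric one (built from
    quantized-model activations X_q). lam = 0 recovers symmetric
    calibration; lam = 1 is fully asymmetric. (Assumed blending form.)"""
    return (1.0 - lam) * (X_fp.T @ X_fp) + lam * (X_q.T @ X_q)

def quant_grid(w, bits=4):
    """Offset and scale of a uniform grid spanning the weight range."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / (2 ** bits - 1)
    return lo, (scale if scale > 0 else 1.0)

def successive_round(w, H, lo, scale, bits=4):
    """Quantize coordinates left to right; after rounding each one,
    propagate its error to the not-yet-quantized weights so the quadratic
    calibration loss (w - w_hat)^T H (w - w_hat) stays small."""
    w = w.astype(float).copy()
    w_hat = np.empty_like(w)
    for i in range(len(w)):
        q = np.clip(np.round((w[i] - lo) / scale), 0, 2 ** bits - 1)
        w_hat[i] = lo + q * scale
        if i + 1 < len(w) and H[i, i] > 0:
            # error compensation on the remaining coordinates
            w[i + 1:] -= (w[i] - w_hat[i]) * H[i, i + 1:] / H[i, i]
    return w_hat
```

The bounded-search extension described in the abstract would, at each step, additionally evaluate a few neighbouring grid points rather than committing to the nearest one, which is where the explicit trade-off between quantization quality and compute cost enters.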