Rethinking Practical and Efficient Quantization Calibration for Vision-Language Models

πŸ“… 2026-02-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the suboptimal calibration performance in post-training quantization of vision-language models (VLMs), which stems from the significant disparity in activation distributions between visual and textual tokens and their differing sensitivities to quantization error. To overcome this challenge, the authors propose Token-level Importance-aware Layer-wise Quantization (TLQ), a novel framework that introduces, for the first time, a gradient-based token-level importance mechanism to guide the calibration process. Furthermore, TLQ incorporates a multi-GPU collaborative layer-wise calibration pipeline that better aligns with the actual inference path. Extensive experiments demonstrate that TLQ consistently achieves substantial performance gains across two prominent VLM architectures, three model scales, and two quantization settings, thereby validating its generality and robustness.

πŸ“ Abstract
Post-training quantization (PTQ) is a primary approach for deploying large language models without fine-tuning, and quantized performance is often strongly affected by the calibration stage of PTQ. In vision-language models (VLMs), however, substantial differences between visual and text tokens in their activation distributions and sensitivities to quantization error pose significant challenges for effective calibration. In this work, we rethink what PTQ calibration should align with in VLMs and propose the Token-level Importance-aware Layer-wise Quantization (TLQ) framework. Guided by gradient information, we design a token-level importance integration mechanism for quantization error and use it to construct a token-level calibration set, enabling a more fine-grained calibration strategy. Furthermore, TLQ introduces a multi-GPU, quantization-exposed layer-wise calibration scheme. This scheme keeps the layer-wise calibration procedure consistent with the true quantized inference path and distributes the complex layer-wise calibration workload across multiple RTX 3090 GPUs, thereby reducing reliance on the large memory of A100 GPUs. TLQ is evaluated across two models, three model scales, and two quantization settings, consistently achieving performance improvements in all settings, indicating strong quantization stability. The code will be released publicly.
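The abstract's core idea, gradient-weighted token importance for quantization error, can be sketched in a few lines. The snippet below is a minimal illustration of the general principle only, not the paper's actual formulation: it assumes per-token importance is the gradient-weighted magnitude of the fake-quantization error, and the names `quantize_int8` and `token_importance` are hypothetical helpers invented for this sketch.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 fake quantization (quantize, then dequantize)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127)
    return q * scale

def token_importance(activations, grads):
    """Gradient-weighted quantization error per token.

    activations: (num_tokens, hidden) float array of layer activations
    grads:       same shape, d(loss)/d(activation) from a backward pass
    Importance of token t ~ sum over hidden dim of |g_t * (x_t - Q(x_t))|.
    """
    err = activations - quantize_int8(activations)
    return np.abs(grads * err).sum(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 16))   # 6 tokens (visual + textual), hidden size 16
g = rng.normal(size=(6, 16))   # stand-in gradients from a backward pass
scores = token_importance(x, g)
top = np.argsort(scores)[::-1][:3]  # 3 most quantization-sensitive tokens
```

Under this reading, tokens with high `scores` would be preferentially represented in the calibration set, which is one plausible way visual and textual tokens could receive different weight during calibration.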
Problem

Research questions and friction points this paper is trying to address.

post-training quantization
vision-language models
quantization calibration
activation distribution
quantization error
Innovation

Methods, ideas, or system contributions that make the work stand out.

post-training quantization
vision-language models
token-level calibration
layer-wise quantization
multi-GPU calibration
πŸ‘₯ Authors
Zhenhao Shang
Northwestern Polytechnical University, Xi’an, China
Haizhao Jing
Northwestern Polytechnical University, Xi’an, China
Guoting Wei
Nanjing University of Science and Technology, Nanjing, China
Haokui Zhang
Northwestern Polytechnical University
Approximate nearest neighbor search, neural architecture search, depth estimation, HSI classification
Rong Xiao
Intellifusion, Shenzhen, China
Jianqing Gao
iFLYTEK, China
Peng Wang
School of Computer Science, Northwestern Polytechnical University, China
Computer Vision, Machine Learning, Artificial Intelligence