Fine-Grained Post-Training Quantization for Large Vision Language Models with Quantization-Aware Integrated Gradients

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a fine-grained post-training quantization strategy for large vision-language models (LVLMs) that addresses the limitations of existing approaches, which assess sensitivity only at the modality level and fail to capture cross-token interactions and quantization error propagation. By introducing axiomatic attribution—a concept from mechanistic interpretability—into LVLM quantization, the method employs Quantization-aware Integrated Gradients (QIG) to quantitatively evaluate token-level sensitivity while accounting for both intra- and inter-modal dynamics, enabling precise calibration. Evaluated under low-bit settings such as W4A8 and W3A16, the approach significantly improves accuracy across multiple LVLMs: for instance, LLaVA-OneVision-7B achieves a 1.60% average accuracy gain under 3-bit weight quantization, narrowing the gap to its full-precision counterpart to just 1.33%, with negligible latency overhead.

📝 Abstract
Large Vision Language Models (LVLMs) have achieved remarkable success in a range of downstream tasks that require multimodal interaction, but their capabilities come with substantial computational and memory overhead, which hinders practical deployment. Among numerous acceleration techniques, post-training quantization is a popular and effective strategy for reducing memory cost and accelerating inference. However, existing LVLM quantization methods typically measure token sensitivity at the modality level, which fails to capture the complex cross-token interactions and falls short of quantitatively measuring the quantization error at the token level. As tokens interact within the model, the distinction between modalities gradually diminishes, suggesting the need for fine-grained calibration. Inspired by axiomatic attribution in mechanistic interpretability, we introduce a fine-grained quantization strategy based on Quantization-aware Integrated Gradients (QIG), which leverages integrated gradients to quantitatively evaluate token sensitivity and push the granularity from the modality level to the token level, reflecting both inter-modality and intra-modality dynamics. Extensive experiments on multiple LVLMs under both W4A8 and W3A16 settings show that our method improves accuracy across models and benchmarks with negligible latency overhead. For example, under 3-bit weight-only quantization, our method improves the average accuracy of LLaVA-OneVision-7B by 1.60%, reducing the gap to its full-precision counterpart to only 1.33%. The code is available at https://github.com/ucas-xiang/QIG.
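The core idea described in the abstract, using integrated gradients to score each token's quantization sensitivity, can be illustrated with a minimal sketch. Here the quantized values serve as the IG baseline, so each attribution estimates how much the quantization of that token perturbs the model output; the completeness axiom guarantees the attributions sum to the exact output change. Everything below (the toy scalar model, the symmetric fake-quantizer, step counts) is an illustrative assumption, not the paper's actual QIG implementation:

```python
def fake_quantize(xs, bits=3):
    # Symmetric uniform fake-quantization (illustrative; not the paper's scheme).
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in xs) / qmax
    return [round(v / scale) * scale for v in xs]

def integrated_gradients(grad_f, x, baseline, steps=200):
    # Midpoint-rule approximation of IG along the straight path baseline -> x.
    attrs = []
    for i in range(len(x)):
        acc = 0.0
        for k in range(steps):
            a = (k + 0.5) / steps
            point = [b + a * (xi - b) for xi, b in zip(x, baseline)]
            acc += grad_f(point)[i]
        attrs.append((x[i] - baseline[i]) * acc / steps)
    return attrs

# Toy differentiable "model" over per-token scores: f(x) = sum(w_i * x_i^2).
w = [0.5, 2.0, -0.7]
f = lambda x: sum(wi * xi * xi for wi, xi in zip(w, x))
grad_f = lambda x: [2.0 * wi * xi for wi, xi in zip(w, x)]

x = [1.0, -1.5, 2.0]                  # full-precision "token" values
baseline = fake_quantize(x)           # quantized values as the IG baseline
attr = integrated_gradients(grad_f, x, baseline)
sensitivity = [abs(a) for a in attr]  # one sensitivity score per token

# Completeness axiom: attributions sum to f(x) - f(baseline).
assert abs(sum(attr) - (f(x) - f(baseline))) < 1e-6
```

In a fine-grained calibration scheme along these lines, tokens with the largest sensitivity scores would be protected (e.g., kept at higher precision or weighted more heavily during calibration), while insensitive tokens tolerate aggressive low-bit quantization.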
Problem

Research questions and friction points this paper is trying to address.

Large Vision Language Models
post-training quantization
token-level sensitivity
quantization error
fine-grained calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

post-training quantization
fine-grained quantization
integrated gradients
vision-language models
token-level sensitivity
Ziwei Xiang
State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA; School of Artificial Intelligence, UCAS
Fanhu Zeng
Institute of Automation, Chinese Academy of Sciences
Multimodal LLM · Trustworthy AI · Efficient Learning
Hongjian Fang
Beijing National Research Center for Information Science and Technology
Rui-Qi Wang
Institute of Artificial Intelligence, USTB
Renxing Chen
State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA; School of Artificial Intelligence, UCAS
Yanan Zhu
School of Artificial Intelligence, Beihang University
Yi Chen
Institute of Automation, Chinese Academy of Sciences
Character Recognition · AI4Science · Large Language Models
Peipei Yang
State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA; School of Artificial Intelligence, UCAS
Xu-Yao Zhang
Institute of Automation, Chinese Academy of Sciences
Pattern Recognition · Machine Learning · OCR