🤖 AI Summary
This work proposes a fine-grained post-training quantization strategy for large vision-language models (LVLMs) that addresses the limitations of existing approaches, which assess sensitivity only at the modality level and fail to capture cross-token interactions and quantization error propagation. By introducing axiomatic attribution—a concept from mechanistic interpretability—into LVLM quantization, the method employs Quantization-aware Integrated Gradients (QIG) to quantitatively evaluate token-level sensitivity while accounting for both intra- and inter-modal dynamics, enabling precise calibration. Evaluated under low-bit settings such as W4A8 and W3A16, the approach significantly improves accuracy across multiple LVLMs: for instance, LLaVA-OneVision-7B achieves a 1.60% average accuracy gain under 3-bit weight quantization, narrowing the gap to its full-precision counterpart to just 1.33%, with negligible latency overhead.
📝 Abstract
Large Vision-Language Models (LVLMs) have achieved remarkable success in a range of downstream tasks that require multimodal interaction, but their capabilities come with substantial computational and memory overhead, which hinders practical deployment. Among numerous acceleration techniques, post-training quantization is a popular and effective strategy for reducing memory cost and accelerating inference. However, existing LVLM quantization methods typically measure token sensitivity at the modality level, which fails to capture complex cross-token interactions and falls short of quantitatively measuring quantization error at the token level. As tokens interact within the model, the distinction between modalities gradually diminishes, suggesting the need for fine-grained calibration. Inspired by axiomatic attribution in mechanistic interpretability, we introduce a fine-grained quantization strategy based on Quantization-aware Integrated Gradients (QIG), which leverages integrated gradients to quantitatively evaluate token sensitivity and pushes the granularity from the modality level to the token level, reflecting both inter-modality and intra-modality dynamics. Extensive experiments on multiple LVLMs under both W4A8 and W3A16 settings show that our method improves accuracy across models and benchmarks with negligible latency overhead. For example, under 3-bit weight-only quantization, our method improves the average accuracy of LLaVA-OneVision-7B by 1.60%, reducing the gap to its full-precision counterpart to only 1.33%. The code is available at https://github.com/ucas-xiang/QIG.
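The abstract does not spell out how QIG computes token-level sensitivity, but the underlying axiomatic-attribution tool it cites, integrated gradients, is well defined: attribute a scalar output to each input coordinate by integrating the gradient along a straight path from a baseline to the input. A minimal sketch of that primitive (all function names here are illustrative, not the paper's implementation; a quadratic toy function stands in for the model's quantization-error objective):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=64):
    """Midpoint-rule approximation of integrated gradients:
    IG_i = (x_i - b_i) * integral_0^1 df/dx_i(b + a*(x - b)) da."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy stand-in for a sensitivity objective: f(x) = sum(x^2), gradient 2x.
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x

x = np.array([1.0, -2.0, 0.5])   # hypothetical per-token quantities
b = np.zeros_like(x)             # zero baseline
ig = integrated_gradients(grad_f, x, b)
# Completeness axiom: attributions sum to f(x) - f(baseline),
# so each entry can be read as that coordinate's share of the output.
```

The completeness property is what makes the attribution "quantitative": per-token scores sum exactly to the change in the objective, so ranking tokens by their scores gives a principled sensitivity ordering for calibration.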