AdaptInfer: Adaptive Token Pruning for Vision-Language Model Inference with Dynamical Text Guidance

📅 2025-08-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational overhead of visual token processing during the prefill phase of vision-language models (VLMs), and the limitations of existing pruning methods, which rely on static prompts or coarse-grained attention patterns, this paper proposes a dynamic text-guided adaptive pruning framework. Methodologically, it reuses layer-wise text-to-text attention maps as soft priors over text-token importance for fine-grained scoring of vision tokens, and it derives an efficient pruning schedule from an offline analysis of cross-modal attention dynamics that reveals consistent inflection points during inference. The framework is plug-and-play and generalizes across multiple downstream tasks. Evaluated on LLaVA-1.5-7B, it reduces prefill CUDA latency by 61.3% while preserving 92.9% of the original average accuracy, and under identical token budgets it outperforms state-of-the-art methods in accuracy.

📝 Abstract
Vision-language models (VLMs) have achieved impressive performance on multimodal reasoning tasks such as visual question answering (VQA), but their inference cost remains a significant challenge due to the large number of vision tokens processed during the prefill stage. Existing pruning methods often rely directly on raw attention patterns or on static text-prompt guidance, failing to exploit the dynamic internal signals generated during inference. To address these issues, we propose AdaptInfer, a plug-and-play framework for adaptive vision-token pruning in VLMs. First, we introduce a fine-grained, dynamic text-guided pruning mechanism that reuses layer-wise text-to-text attention maps to construct soft priors over text-token importance, allowing more informed scoring of vision tokens at each stage. Second, we perform an offline analysis of cross-modal attention shifts and identify consistent inflection locations during inference, which motivate a more principled and efficient pruning schedule. Our method is lightweight, plug-and-play, and generalizes across multimodal tasks. Experimental results verify its effectiveness: for example, it reduces CUDA latency by 61.3% while maintaining 92.9% of the average accuracy of vanilla LLaVA-1.5-7B. Under the same token budget, AdaptInfer surpasses state-of-the-art methods in accuracy.
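The core idea in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration, not the authors' implementation: the function name, tensor shapes, and the specific prior (mean attention received per text token) are all assumptions. It weights text tokens via a text-to-text attention map, scores each vision token by the prior-weighted cross-modal attention it receives, and keeps the top-k.

```python
# Hypothetical sketch of text-guided vision-token pruning (shapes and the
# choice of soft prior are assumptions, not the paper's exact method).
import numpy as np

def prune_vision_tokens(t2t_attn, t2v_attn, keep_ratio=0.4):
    """t2t_attn: (T, T) text-to-text attention (rows sum to 1).
    t2v_attn: (T, V) text-to-vision attention.
    Returns sorted indices of the vision tokens to keep."""
    # Soft prior over text tokens: mean attention each one receives.
    text_prior = t2t_attn.mean(axis=0)            # (T,)
    text_prior = text_prior / text_prior.sum()    # normalize to a distribution
    # Vision-token importance: cross-attention weighted by the text prior.
    vision_score = text_prior @ t2v_attn          # (V,)
    k = max(1, int(keep_ratio * t2v_attn.shape[1]))
    # Top-k by score, returned in original token order.
    return np.sort(np.argsort(vision_score)[-k:])

# Toy example: 4 text tokens, 10 vision tokens, random attention maps.
rng = np.random.default_rng(0)
t2t = rng.random((4, 4)); t2t /= t2t.sum(axis=1, keepdims=True)
t2v = rng.random((4, 10)); t2v /= t2v.sum(axis=1, keepdims=True)
print(prune_vision_tokens(t2t, t2v))  # indices of the 4 retained vision tokens
```

In a real VLM this scoring would run per layer during prefill, with the pruning schedule (how many tokens to drop at which layers) set by the offline inflection-point analysis the abstract describes.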
Problem

Research questions and friction points this paper is trying to address.

Reduces vision token processing cost in VLMs
Improves pruning using dynamic text guidance signals
Enhances efficiency while maintaining high task accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic text-guided pruning using attention maps
Offline analysis of cross-modal attention shifts
Lightweight plug-and-play framework for VLMs
Weichen Zhang
PhD, University of Sydney
Computer Vision, Deep Learning, Transfer Learning, Domain Adaptation
Zhui Zhu
Department of Automation, Tsinghua University, Beijing, China
Ningbo Li
Global Innovation Exchange, Tsinghua University, Beijing, China
Kebin Liu
Global Innovation Exchange, Tsinghua University, Beijing, China
Yunhao Liu
ACM Fellow, IEEE Fellow, CCF Fellow, Tsinghua University
Wireless Sensor Networks/RFID, Cyber Physical Systems and IoT, Privacy and Security, Cloud Computing