🤖 AI Summary
To address the high computational overhead of processing visual tokens during the prefill phase of vision-language models (VLMs), and the limitations of existing pruning methods that rely on static prompts or coarse-grained attention patterns, this paper proposes a dynamic, text-guided adaptive pruning framework. Methodologically, it reuses layer-wise text-to-text attention maps as soft priors for fine-grained importance scoring of vision tokens, and derives an efficient pruning schedule from an offline analysis of cross-modal attention dynamics, which reveals consistent inflection points during inference. The framework is plug-and-play and generalizes across downstream tasks. Evaluated on LLaVA-1.5-7B, it reduces CUDA latency during prefill by 61.3% while preserving 92.9% of the original average accuracy, and under identical token budgets it outperforms state-of-the-art methods in accuracy.
📝 Abstract
Vision-language models (VLMs) achieve impressive performance on multimodal reasoning tasks such as visual question answering (VQA), but their inference cost remains a significant challenge due to the large number of vision tokens processed during the prefill stage. Existing pruning methods often rely directly on raw attention patterns or on static text-prompt guidance, failing to exploit the dynamic internal signals generated during inference. To address these issues, we propose AdaptInfer, a plug-and-play framework for adaptive vision-token pruning in VLMs. First, we introduce a fine-grained, dynamic text-guided pruning mechanism that reuses layer-wise text-to-text attention maps to construct soft priors over text-token importance, allowing more informed scoring of vision tokens at each stage. Second, we perform an offline analysis of cross-modal attention shifts and identify consistent inflection locations during inference, which motivate a more principled and efficient pruning schedule. Our method is lightweight, plug-and-play, and generalizable across multimodal tasks. Experimental results verify its effectiveness: for example, it reduces CUDA latency by 61.3% while maintaining 92.9% of the average accuracy of vanilla LLaVA-1.5-7B. Under the same token budget, AdaptInfer surpasses state-of-the-art methods in accuracy.
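The core scoring idea described in the abstract — weighting text-to-vision attention by a soft prior derived from text-to-text attention, then keeping the top-scoring vision tokens — can be illustrated with a minimal NumPy sketch. This is an assumption-laden reconstruction, not the paper's actual implementation: the function name `text_guided_vision_pruning`, the column-mass text prior, and the fixed `keep_ratio` are all illustrative choices, and the paper's layer-wise schedule and inflection-point logic are omitted.

```python
import numpy as np

def text_guided_vision_pruning(t2t_attn, t2v_attn, keep_ratio=0.5):
    """Score vision tokens with a text-derived soft prior, keep the top fraction.

    t2t_attn: (n_text, n_text) text-to-text attention map from one layer.
    t2v_attn: (n_text, n_vision) text-to-vision attention map from the same layer.
    Returns the (sorted) indices of the vision tokens to keep.
    """
    # Soft prior over text tokens: how much attention mass each text token
    # receives from the other text tokens (column sums, normalized).
    text_prior = t2t_attn.sum(axis=0)
    text_prior = text_prior / text_prior.sum()

    # Vision-token importance: text-to-vision attention weighted by the prior,
    # so attention coming from salient text tokens counts more.
    vision_scores = text_prior @ t2v_attn  # shape (n_vision,)

    # Retain the highest-scoring fraction of vision tokens.
    n_keep = max(1, int(round(keep_ratio * t2v_attn.shape[1])))
    keep_idx = np.argsort(vision_scores)[::-1][:n_keep]
    return np.sort(keep_idx)
```

In a real VLM, the retained indices would be used to slice the vision-token hidden states before the next transformer layer; the per-layer `keep_ratio` would be set by the pruning schedule rather than fixed.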