Multi-Cue Adaptive Visual Token Pruning for Large Vision-Language Models

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visual token pruning methods for Large Vision-Language Models (LVLMs) over-rely on attention scores, neglecting spatial locality and semantic redundancy in visual representations. Method: This paper proposes a training-free, plug-and-play pruning framework that jointly leverages three complementary cues—attention scores, Euclidean spatial distances, and cosine semantic similarities—within a unified formulation. We further introduce an adaptive non-maximum suppression (NMS) mechanism to dynamically balance positional bias and feature redundancy. Contribution/Results: Our method achieves state-of-the-art performance across multiple LVLM architectures and standard benchmarks. It consistently delivers higher vision-language understanding accuracy under varying pruning ratios, accelerates inference by up to 2.1×, and reduces GPU memory consumption by 43%, without requiring any fine-tuning or architectural modification.

📝 Abstract
As the computational needs of Large Vision-Language Models (LVLMs) increase, visual token pruning has proven effective in improving inference speed and memory efficiency. Traditional pruning methods in LVLMs predominantly focus on attention scores to determine token relevance, overlooking critical aspects such as spatial position and token similarity. To this end, we introduce AdaptPrune, a novel plug-and-play training-free pruning method that builds on conventional attention-based pruning by integrating spatial distance and token similarity with an adaptive NMS approach. Our method is based on several observed phenomena in large models: the positional bias in the model's image attention and the redundancy of token information ignored by previous approaches. By integrating attention, spatial, and similarity information, our approach ensures a comprehensive evaluation of token importance and substantially refines the pruning decisions. Our method has been extensively tested across various LVLMs and benchmarks, confirming its robustness and adaptability. The results demonstrate that AdaptPrune consistently outperforms existing methods across various pruning ratios. Code is available at https://github.com/bzluan/AdaptPrune.
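The paper's exact formulation is not reproduced here, but the abstract's three-cue idea (attention scores, Euclidean spatial distance, cosine similarity, combined via an NMS-style suppression) can be sketched roughly as a greedy selection loop. All function names, the suppression kernel, and the `sim_weight`/`spatial_sigma` parameters below are illustrative assumptions, not AdaptPrune's actual implementation.

```python
import numpy as np

def adaptive_nms_prune(features, positions, attn, keep,
                       spatial_sigma=2.0, sim_weight=0.5):
    """Greedy NMS-style visual token selection combining three cues.

    features:  (N, D) visual token embeddings
    positions: (N, 2) token (row, col) grid coordinates
    attn:      (N,)   per-token attention scores
    keep:      number of tokens to retain

    Note: a rough sketch of the multi-cue idea; the real method's
    weighting and adaptive NMS threshold may differ substantially.
    """
    # Cosine similarity between tokens (semantic-redundancy cue).
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    cos_sim = f @ f.T

    # Euclidean distances on the spatial grid (positional cue).
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)

    scores = attn.astype(float).copy()
    selected = []
    for _ in range(keep):
        i = int(np.argmax(scores))       # keep the highest-scoring token
        selected.append(i)
        scores[i] = -np.inf              # never pick it again
        # Suppress tokens that are spatially close and/or semantically
        # similar to the kept one; the two penalties are blended.
        spatial_pen = np.exp(-dist[i] ** 2 / (2 * spatial_sigma ** 2))
        penalty = (sim_weight * np.clip(cos_sim[i], 0.0, 1.0)
                   + (1.0 - sim_weight) * spatial_pen)
        alive = scores > -np.inf
        scores[alive] *= (1.0 - penalty[alive])
    return sorted(selected)
```

Because the loop is training-free and operates only on attention maps, positions, and embeddings, it can in principle be dropped in front of any LVLM's visual token stream, which matches the plug-and-play claim in the abstract.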
Problem

Research questions and friction points this paper is trying to address.

Rising computational cost of LVLM inference motivates visual token pruning for speed and memory efficiency
Traditional attention-only pruning overlooks spatial position and token similarity
How to jointly exploit spatial distance, token similarity, and adaptive NMS in pruning decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates spatial distance and token similarity with attention-based scoring
Applies adaptive NMS to balance positional bias against feature redundancy
Refines pruning decisions without any fine-tuning or architectural change