🤖 AI Summary
This work addresses the high computational and memory costs incurred during inference in large vision-language models due to processing dense visual tokens. The authors propose a training-free visual token pruning framework that formulates pruning as a text-guided subspace reconstruction problem. By employing a residual energy-guided greedy strategy combined with text-relevance weighting, the method selects a compact subset of informative, task-relevant tokens, preserving cross-modal alignment and the geometric structure of the token space while significantly improving efficiency. The approach is lightweight and model-agnostic, and it consistently outperforms existing pruning techniques across mainstream architectures (including LLaVA-1.5, LLaVA-NeXT, and Qwen2.5-VL), effectively reducing computational load, memory consumption, and inference latency.
📝 Abstract
Large Vision-Language Models (LVLMs) rely on dense visual tokens to capture fine-grained visual information, but processing all of these tokens incurs substantial computational and memory overhead during inference. To address this issue, we propose ResPrune, a training-free visual token pruning framework that enables efficient LVLM inference by selecting a compact yet informative subset of visual tokens. ResPrune formulates visual token pruning as a subspace reconstruction problem and employs a greedy subspace expansion strategy guided by residual energy, allowing it to preserve the geometric structure of the original visual token space. To further incorporate cross-modal alignment, the selection process is conditioned on textual relevance, encouraging the retention of tokens that are both informative and instruction-relevant. The proposed method is lightweight and model-agnostic, and can be seamlessly integrated into existing LVLM pipelines without retraining or architectural modifications. Extensive experiments on multiple LVLM backbones, including LLaVA-1.5, LLaVA-NeXT, and Qwen2.5-VL, demonstrate that ResPrune consistently outperforms existing pruning approaches across a wide range of benchmarks, while achieving effective reductions in computation, memory consumption, and inference latency.
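The abstract's core idea, greedy subspace expansion driven by residual energy and weighted by text relevance, can be illustrated with a minimal NumPy sketch. This is an assumption-laden reconstruction, not the paper's exact algorithm: the function name `resprune_select`, the use of max cosine similarity for text relevance, and the Gram-Schmidt-style residual deflation are all illustrative choices.

```python
import numpy as np

def resprune_select(V, T, k):
    """Hypothetical sketch of text-weighted residual-energy token pruning.

    V: (N, d) visual token embeddings; T: (M, d) text token embeddings;
    k: number of visual tokens to keep. Returns indices of kept tokens.
    """
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    Tn = T / np.linalg.norm(T, axis=1, keepdims=True)
    # Text-relevance weight per visual token: max cosine similarity
    # to any text token (one plausible weighting; the paper may differ).
    w = np.clip((Vn @ Tn.T).max(axis=1), 0.0, None)
    R = V * w[:, None]          # weighted residual matrix
    selected = []
    for _ in range(k):
        energy = (R ** 2).sum(axis=1)
        energy[selected] = -np.inf          # never reselect a token
        i = int(np.argmax(energy))          # token with largest residual energy
        selected.append(i)
        # Deflate: project all residuals off the chosen token's direction,
        # so the next pick covers a different part of the subspace.
        u = R[i] / (np.linalg.norm(R[i]) + 1e-12)
        R = R - np.outer(R @ u, u)
    return selected
```

Each greedy step picks the token whose weighted residual is largest, then removes that direction from every residual, so the retained set spans the dominant, text-relevant directions of the visual token space.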