ResPrune: Text-Conditioned Subspace Reconstruction for Visual Token Pruning in Large Vision-Language Models

📅 2026-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational and memory costs incurred during inference in large vision-language models due to processing dense visual tokens. The authors propose a training-free visual token pruning framework that formulates pruning as a text-guided subspace reconstruction problem. By employing a residual energy-guided greedy strategy combined with text-relevance weighting, the method dynamically selects a compact subset of informative and task-relevant tokens, preserving cross-modal alignment and geometric structure while significantly improving efficiency. The approach is model-agnostic and lightweight, consistently outperforming existing pruning techniques across mainstream architectures—including LLaVA-1.5, LLaVA-NeXT, and Qwen2.5-VL—thereby effectively reducing computational load, memory consumption, and inference latency.

📝 Abstract
Large Vision-Language Models (LVLMs) rely on dense visual tokens to capture fine-grained visual information, but processing all these tokens incurs substantial computational and memory overhead during inference. To address this issue, we propose ResPrune, a training-free visual token pruning framework that enables efficient LVLM inference by selecting a compact yet informative subset of visual tokens. ResPrune formulates visual token pruning as a subspace reconstruction problem and employs a greedy subspace expansion strategy guided by residual energy, allowing it to preserve the geometric structure of the original visual token space. To further incorporate cross-modal alignment, the selection process is conditioned on textual relevance, encouraging the retention of tokens that are both informative and instruction-relevant. The proposed method is lightweight and model-agnostic, and can be seamlessly integrated into existing LVLM pipelines without retraining or architectural modifications. Extensive experiments on multiple LVLM backbones, including LLaVA-1.5, LLaVA-NeXT, and Qwen2.5-VL, demonstrate that ResPrune consistently outperforms existing pruning approaches across a wide range of benchmarks, while achieving effective reductions in computation, memory consumption, and inference latency.
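The abstract describes greedy subspace expansion guided by residual energy with text-relevance conditioning. The following is a minimal NumPy sketch of that general idea, not the paper's actual implementation: the function name `resprune_select`, the cosine-similarity relevance weight, and the specific scoring rule (relevance times residual energy) are all illustrative assumptions.

```python
import numpy as np

def resprune_select(V, t, k):
    """Illustrative greedy token selection: at each step, pick the token whose
    text-weighted residual energy (w.r.t. the span of already-selected tokens)
    is largest, then orthogonalize all residuals against it.

    V : (n, d) matrix of visual token embeddings
    t : (d,) text embedding used for relevance weighting (an assumption here)
    k : number of tokens to retain
    Returns the list of selected token indices, in selection order.
    """
    # Text-relevance weights: cosine similarity to t, shifted to [0, 1].
    sims = (V @ t) / (np.linalg.norm(V, axis=1) * np.linalg.norm(t) + 1e-8)
    w = (sims + 1.0) / 2.0

    R = V.astype(float).copy()  # residual of each token vs. selected subspace
    selected = []
    for _ in range(k):
        energy = np.sum(R * R, axis=1)   # residual energy per token
        scores = w * energy
        scores[selected] = -np.inf       # never reselect a token
        j = int(np.argmax(scores))
        selected.append(j)
        # Greedy subspace expansion: remove the chosen direction from all residuals.
        u = R[j] / (np.linalg.norm(R[j]) + 1e-8)
        R = R - np.outer(R @ u, u)
    return selected
```

Because residuals shrink only in directions already covered by selected tokens, the greedy step naturally favors tokens that add new geometric information, while the weight `w` biases selection toward instruction-relevant content.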
Problem

Research questions and friction points this paper is trying to address.

visual token pruning
large vision-language models
computational overhead
memory consumption
inference efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

subspace reconstruction
visual token pruning
text-conditioned
training-free
residual energy
Xu Li
College of Computer Science and Artificial Intelligence, Fudan University, Shanghai 200433, China
Yi Zheng
College of Computer Science and Artificial Intelligence, Fudan University, Shanghai 200433, China
Yuxuan Liang
Assistant Professor, Hong Kong University of Science and Technology (Guangzhou)
Spatio-Temporal Data Mining, Urban Computing, Urban AI, Foundation Models, Time Series
Zhe Liu
College of Computer Science and Artificial Intelligence, Fudan University, Shanghai 200433, China
Xiaolei Chen
College of Computer Science and Artificial Intelligence, Fudan University, Shanghai 200433, China
Haotian Chen
University of California, Los Angeles
Political Economy, Non-market Strategy, American Politics
Rui Zhu
College of Computer Science and Artificial Intelligence, Fudan University, Shanghai 200433, China
Xiangyang Xue
Professor of Computer Science, Fudan University
Computer Vision, Pattern Recognition, Machine Learning