ViTCoP: Accelerating Large Vision-Language Models via Visual and Textual Semantic Collaborative Pruning

📅 2026-01-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high computational cost that redundant visual tokens impose on large vision-language models, a problem that existing pruning methods handle imperfectly because they often discard critical information. The authors propose ViTCoP, a visual and textual semantic collaborative pruning framework that filters redundant tokens in the vision encoder and exploits the hierarchical structure of the large language model to perform progressive co-pruning, thereby preserving informative and diverse visual tokens. It introduces, for the first time, the L2 norm of K-vectors as a token saliency metric, which remains compatible with FlashAttention for efficient inference. Experiments show that ViTCoP achieves state-of-the-art results on both image and video understanding tasks while significantly reducing inference latency and GPU memory consumption, particularly under extreme pruning ratios.
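For concreteness, here is a minimal sketch of the vision-encoder stage the summary describes: dropping visual tokens that duplicate information already kept. The paper states only that redundant tokens are filtered in the vision encoder; the greedy cosine-similarity rule, the function name `filter_redundant_tokens`, and the `sim_thresh` parameter below are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def filter_redundant_tokens(tokens: torch.Tensor, sim_thresh: float = 0.9):
    """Greedy redundancy filter over vision-encoder outputs (hypothetical).

    tokens: (num_tokens, dim) visual features.
    A token is kept only if its cosine similarity to every previously
    kept token stays below `sim_thresh`.
    """
    normed = F.normalize(tokens, dim=-1)      # unit-norm features for cosine similarity
    kept = [0]                                # always keep the first token
    for i in range(1, tokens.size(0)):
        sims = normed[i] @ normed[kept].T     # similarity to all kept tokens
        if sims.max() < sim_thresh:
            kept.append(i)                    # sufficiently novel: keep it
    return tokens[kept], kept
```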

📝 Abstract
Large Vision-Language Models (LVLMs) incur high computational costs due to significant redundancy in their visual tokens. To effectively reduce this cost, researchers have proposed various visual token pruning methods. However, existing methods are generally limited, either losing critical visual information prematurely due to pruning in the vision encoder, or leading to information redundancy among the selected tokens due to pruning in the Large Language Models (LLMs). To address these challenges, we propose a Visual and Textual Semantic Collaborative Pruning framework (ViTCoP) that combines redundancy filtering in the vision encoder with step-wise co-pruning within the LLM based on its hierarchical characteristics, to efficiently preserve critical and informationally diverse visual tokens. In addition, to ensure compatibility with acceleration techniques like FlashAttention, we introduce the L2 norm of K-vectors as the token saliency metric in the LLM. Extensive experiments on various Large Vision-Language Models demonstrate that ViTCoP not only achieves state-of-the-art performance surpassing existing methods on both image and video understanding tasks, but also significantly reduces model inference latency and GPU memory consumption. Notably, its performance advantage over other methods becomes even more pronounced under extreme pruning rates.
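The abstract's K-vector saliency metric lends itself to a short sketch. Below is a minimal, hypothetical PyTorch illustration of selecting salient visual tokens at one LLM layer by the L2 norm of their key vectors; the tensor shapes, the function name `prune_visual_tokens`, and `keep_ratio` are assumptions, and the paper's actual progressive co-pruning schedule across layers is not reproduced here.

```python
import torch

def prune_visual_tokens(hidden, k_proj, visual_idx, keep_ratio=0.25):
    """Keep the most salient visual tokens at a single LLM layer (sketch).

    hidden:     (batch, seq_len, dim) hidden states entering the layer
    k_proj:     the layer's key projection (torch.nn.Linear)
    visual_idx: 1-D LongTensor of sequence positions holding visual tokens
    """
    keys = k_proj(hidden)                           # (batch, seq_len, dim_k)
    saliency = keys.norm(p=2, dim=-1)               # L2 norm of each K-vector
    vis_scores = saliency[:, visual_idx]            # scores of visual tokens only
    n_keep = max(1, int(keep_ratio * visual_idx.numel()))
    top = vis_scores.topk(n_keep, dim=-1).indices   # most salient visual tokens
    kept = visual_idx[top]                          # map back to sequence positions
    return kept.sort(dim=-1).values                 # preserve original token order
```

Because the score is a per-token statistic of the key projection alone, no attention-probability matrix ever needs to be materialized, which is why such a metric can coexist with FlashAttention's fused kernels.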
Problem

Research questions and friction points this paper is trying to address.

Large Vision-Language Models
visual token pruning
computational redundancy
information preservation
model acceleration
Innovation

Methods, ideas, or system contributions that make the work stand out.

visual-textual collaborative pruning
token redundancy reduction
hierarchical co-pruning
L2 norm saliency
large vision-language models
Wen Luo
Peking University
Peng Chen
School of Software Engineering, Huazhong University of Science and Technology, Wuhan, China
Xiaotao Huang
School of Software Engineering, Huazhong University of Science and Technology, Wuhan, China
Liqun Huang
School of Software Engineering, Huazhong University of Science and Technology, Wuhan, China