EVTP-IVS: Effective Visual Token Pruning For Unifying Instruction Visual Segmentation In Multi-Modal Large Language Models

📅 2025-08-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high inference overhead of multimodal large language models (MLLMs) in instruction-based visual segmentation (IVS)—particularly prohibitive in video scenarios—this paper proposes a vision token pruning method grounded in *k*-center clustering and spatial information modeling. The approach jointly leverages positional priors and information entropy to ensure theoretical guarantees on both spatial coverage and semantic completeness, enabling efficient token selection within the vision encoder. On standard IVS benchmarks, retaining only 20% of vision tokens preserves original segmentation accuracy, while achieving 3.5× and 5× inference speedups for images and videos, respectively—substantially outperforming existing pruning techniques. The core contribution lies in being the first to integrate geometric clustering with structured spatial modeling for MLLM vision token compression, thereby achieving a principled balance between computational efficiency and task fidelity.

Technology Category

Application Category

📝 Abstract
Instructed Visual Segmentation (IVS) tasks require segmenting objects in images or videos based on natural language instructions. While recent multimodal large language models (MLLMs) have achieved strong performance on IVS, their inference cost remains a major bottleneck, particularly in video. We empirically analyze visual token sampling in MLLMs and observe a strong correlation between subset token coverage and segmentation performance. This motivates our design of a simple and effective token pruning method that selects a compact yet spatially representative subset of tokens to accelerate inference. In this paper, we introduce a novel visual token pruning method for IVS, called EVTP-IV, which builds upon the k-center by integrating spatial information to ensure better coverage. We further provide an information-theoretic analysis to support our design. Experiments on standard IVS benchmarks show that our method achieves up to 5X speed-up on video tasks and 3.5X on image tasks, while maintaining comparable accuracy using only 20% of the tokens. Our method also consistently outperforms state-of-the-art pruning baselines under varying pruning ratios.
Problem

Research questions and friction points this paper is trying to address.

Reducing visual token redundancy for faster MLLM inference
Maintaining segmentation accuracy with sparse token sampling
Unifying efficient instruction visual segmentation across images and videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selects compact spatially representative token subset
Integrates spatial information into k-center algorithm
Achieves significant speed-up while maintaining accuracy
🔎 Similar Papers
No similar papers found.