VLM-Pruner: Buffering for Spatial Sparsity in an Efficient VLM Centrifugal Token Pruning Paradigm

📅 2025-12-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) suffer from high inference overhead due to excessive visual tokens, hindering mobile deployment. Existing pruning methods either rely solely on token importance while ignoring redundancy, or neglect spatial structure, resulting in sparse, discontinuous token retention and incomplete coverage of target regions. This paper proposes VLM-Pruner, a training-free, efficient token pruning framework. It introduces a *centrifugal token pruning paradigm* coupled with a *Buffering for Spatial Sparsity (BSS)* criterion to eliminate redundancy while preserving the spatial continuity of target regions. Further, it integrates *importance-based parallel greedy selection* with a *salient-information fusion mechanism* for discarded tokens, jointly ensuring fine-grained central semantic fidelity and global contextual integrity. Evaluated on five mainstream VLMs, the method achieves an 88.9% token pruning rate, significantly outperforming state-of-the-art baselines, while delivering end-to-end inference acceleration.

📝 Abstract
Vision-language models (VLMs) excel at image understanding tasks, but the large number of visual tokens imposes significant computational costs, hindering deployment on mobile devices. Many pruning methods rely solely on token importance and thus overlook inter-token redundancy, retaining numerous duplicated tokens and wasting capacity. Although some redundancy-aware approaches have been proposed, they often ignore the spatial relationships among visual tokens. This can lead to overly sparse selections of retained tokens that fail to adequately cover the regions of target objects. To address these limitations, we propose VLM-Pruner, a training-free token pruning algorithm that explicitly balances redundancy and spatial sparsity. We introduce a centrifugal token pruning paradigm that enables near-to-far selection while prioritizing the preservation of fine-grained object details. Moreover, we design a Buffering for Spatial Sparsity (BSS) criterion that defers the selection of spatially distant tokens. We further adopt a parallel greedy strategy to conduct token selection efficiently. To mitigate information loss from pruning, we selectively fuse salient information from the discarded tokens into the retained ones. Comprehensive comparisons demonstrate that VLM-Pruner consistently outperforms strong baselines across five VLMs with an 88.9% pruning rate, while delivering an end-to-end inference speedup.
Problem

Research questions and friction points this paper is trying to address.

Reduces computational cost of vision-language models for mobile deployment
Addresses redundancy and spatial sparsity in token pruning methods
Preserves fine-grained object details while achieving high pruning rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free centrifugal pruning balances redundancy and spatial sparsity
Buffering for Spatial Sparsity defers selection of distant tokens
Selective fusion of salient information from discarded tokens mitigates loss
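Read together, the bullets above describe a three-step pipeline: importance-seeded near-to-far selection, distance-based buffering of spatially far tokens, and fusion of discarded tokens into retained ones. A minimal NumPy sketch of that pipeline, inferred only from the abstract and not from the authors' implementation, might look as follows; the function name, the Euclidean grid distance, the `buffer_dist` threshold, and the importance-weighted averaging are all illustrative assumptions.

```python
import numpy as np

def centrifugal_prune(features, importance, coords, keep, buffer_dist=2.0):
    """Hypothetical sketch of centrifugal token pruning with spatial buffering.

    features:   (N, D) visual token embeddings
    importance: (N,)   per-token importance scores
    coords:     (N, 2) token positions on the image grid
    keep:       number of tokens to retain
    """
    N = len(importance)
    selected = [int(np.argmax(importance))]   # seed with the most important token
    deferred = []
    for idx in np.argsort(-importance):        # greedy, importance-descending
        if len(selected) >= keep:
            break
        if idx in selected:
            continue
        # BSS-style buffering: defer tokens far from every retained token,
        # so selection grows outward (near-to-far) from the seed region
        d = np.min(np.linalg.norm(coords[selected] - coords[idx], axis=1))
        (deferred if d > buffer_dist else selected).append(int(idx))
    for idx in deferred:                       # admit deferred tokens if budget remains
        if len(selected) >= keep:
            break
        selected.append(idx)
    kept = np.array(selected[:keep])
    kept_set = set(kept.tolist())
    # salient-information fusion: merge each discarded token into its nearest
    # retained token via importance-weighted averaging (an assumption here)
    fused = features[kept].copy()
    weights = np.ones(len(kept))
    for idx in range(N):
        if idx in kept_set:
            continue
        nearest = int(np.argmin(np.linalg.norm(coords[kept] - coords[idx], axis=1)))
        w = importance[idx]
        fused[nearest] = (weights[nearest] * fused[nearest] + w * features[idx]) / (weights[nearest] + w)
        weights[nearest] += w
    return kept, fused
```

Being training-free, a routine like this could run per layer at inference time using whatever importance signal the host VLM already exposes (e.g., attention to text tokens).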
Zhenkai Wu
Zhejiang University, Huawei Noah's Ark Lab
Xiaowen Ma
Zhejiang University, Huawei Noah's Ark Lab
Zhenliang Ni
Huawei Noah's Ark Lab
Dengming Zhang
Zhejiang University, Huawei Noah's Ark Lab
Han Shu
Huawei Noah's Ark Lab
Xin Jiang
Huawei Noah's Ark Lab
Xinghao Chen
Huawei Noah's Ark Lab