🤖 AI Summary
Vision-language models (VLMs) suffer from high inference overhead due to their large number of visual tokens, hindering mobile deployment. Existing pruning methods either rely solely on token importance while ignoring inter-token redundancy, or neglect spatial structure, resulting in sparse, discontinuous token retention and incomplete coverage of target regions. This paper proposes VLM-Pruner, a training-free, efficient token pruning framework. It introduces a *centrifugal token pruning paradigm* coupled with a *Buffering for Spatial Sparsity (BSS)* criterion to eliminate redundancy while preserving the spatial continuity of target regions, and combines *importance-based parallel greedy selection* with a *salient-information fusion mechanism* for discarded tokens, jointly preserving fine-grained object detail and global contextual integrity. Evaluated on five mainstream VLMs, the method achieves an 88.9% token pruning rate, consistently outperforming state-of-the-art baselines while delivering end-to-end inference acceleration.
📝 Abstract
Vision-language models (VLMs) excel at image understanding tasks, but the large number of visual tokens imposes significant computational costs, hindering deployment on mobile devices. Many pruning methods rely solely on token importance and thus overlook inter-token redundancy, retaining numerous duplicated tokens and wasting capacity. Although some redundancy-aware approaches have been proposed, they often ignore the spatial relationships among visual tokens. This can lead to overly sparse selections of retained tokens that fail to adequately cover the regions of target objects. To address these limitations, we propose VLM-Pruner, a training-free token pruning algorithm that explicitly balances redundancy and spatial sparsity. We introduce a centrifugal token pruning paradigm that enables near-to-far selection while prioritizing the preservation of fine-grained object details. Moreover, we design a Buffering for Spatial Sparsity (BSS) criterion that defers the selection of spatially distant tokens. We further adopt a parallel greedy strategy to conduct token selection efficiently. To mitigate information loss from pruning, we selectively fuse salient information from the discarded tokens into the retained ones. Comprehensive comparisons demonstrate that VLM-Pruner consistently outperforms strong baselines across five VLMs with an 88.9% pruning rate, while delivering an end-to-end inference speedup.
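The abstract describes a near-to-far (centrifugal) selection that trades off importance against redundancy, defers spatially distant tokens (BSS), and fuses discarded tokens into retained ones. The following is a minimal serial sketch of that idea, not the paper's implementation: the scoring form `importance - lam * redundancy`, the hyper-parameters `lam` and `buffer_dist`, and the importance-weighted fusion rule are all illustrative assumptions, and the paper's parallel greedy strategy is replaced here by a simple loop.

```python
import numpy as np

def prune_tokens(tokens, positions, importance, keep, lam=0.5, buffer_dist=2.0):
    """Sketch of centrifugal token pruning with BSS-style deferral and fusion.

    tokens:     (N, D) visual token features
    positions:  (N, 2) spatial grid coordinates of each token
    importance: (N,)   per-token importance scores (e.g. text-to-image attention)
    keep:       number of tokens to retain
    lam, buffer_dist: illustrative hyper-parameters, not from the paper
    """
    N = tokens.shape[0]
    norms = np.linalg.norm(tokens, axis=1) + 1e-8
    # Seed at the most important token, the "center" of the target region.
    selected = [int(np.argmax(importance))]
    deferred = set()
    while len(selected) < keep:
        best, best_score = -1, -np.inf
        for i in range(N):
            if i in selected:
                continue
            # Redundancy: max cosine similarity to the already-kept tokens.
            sims = tokens[selected] @ tokens[i] / (norms[selected] * norms[i])
            # Distance to the nearest kept token drives near-to-far growth.
            dist = float(np.linalg.norm(positions[selected] - positions[i], axis=1).min())
            if dist > buffer_dist and i not in deferred:
                deferred.add(i)  # buffer spatially distant tokens for a later pass
                continue
            score = float(importance[i]) - lam * float(sims.max())
            if score > best_score:
                best, best_score = i, score
        if best == -1:
            continue  # all candidates were just buffered; rescore them next pass
        selected.append(best)
    # Salient-information fusion: fold each discarded token into its most
    # similar retained token via an importance-weighted running average.
    fused = tokens[selected].copy()
    w = np.asarray(importance, dtype=float)
    acc_w = w[selected].copy()
    for i in range(N):
        if i in selected:
            continue
        sims = tokens[selected] @ tokens[i] / (norms[selected] * norms[i])
        j = int(np.argmax(sims))
        fused[j] = (fused[j] * acc_w[j] + tokens[i] * w[i]) / (acc_w[j] + w[i] + 1e-8)
        acc_w[j] += w[i]
    return selected, fused
```

Under these assumptions, the deferral set lets nearby tokens win ties first, so the kept set grows outward from the seed instead of scattering across the grid; the fusion step then reduces the information lost at an 88.9% pruning rate by merging each discarded token into its closest retained neighbor in feature space.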