🤖 AI Summary
Visual language models (VLMs) suffer from excessive visual token redundancy, leading to high computational overhead and low inference efficiency; existing token pruning methods lack theoretical error guarantees and plug-and-play compatibility. This paper proposes the first **provably error-bounded** visual token pruning method: leveraging object-centric representations, it directly selects the most discriminative tokens by minimizing reconstruction error. The approach requires only lightweight pretraining—no fine-tuning—and integrates seamlessly into mainstream VLMs. Across arbitrary pruning ratios, it consistently outperforms state-of-the-art methods, achieving superior accuracy while reducing GPU memory consumption and latency. Moreover, the retained tokens exhibit strong interpretability, aligning with semantically meaningful image regions. The implementation is open-sourced and designed for immediate, plug-and-play deployment.
📝 Abstract
In Vision Language Models (VLMs), vision tokens are quantity-heavy yet information-dispersed compared with language tokens, and thus consume a large amount of unnecessary computation. Pruning redundant vision tokens for efficient VLM inference has been studied continuously, but all existing methods resort to indirect, non-guaranteed selection criteria. We propose OC-VTP, a direct and guaranteed approach that selects the most representative vision tokens for high-efficiency yet accuracy-preserving VLM inference. OC-VTP requires only light-weight pre-training of a small object-centric vision token pruner, which can then be inserted into existing VLMs without fine-tuning any model on any dataset. It is guaranteed that the most representative vision tokens are kept, by minimizing the error in reconstructing the original unpruned tokens from the selected ones. Across all vision-token pruning ratios, i.e., inference-efficiency levels, OC-VTP consistently helps mainstream VLMs preserve the highest inference accuracy. Our pruning also demonstrates interesting interpretability. Our code is available at https://github.com/GarryLarry010131/OC-VTP.
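To make the selection criterion concrete, below is a minimal sketch of reconstruction-error-driven token pruning. It greedily keeps the subset of tokens whose least-squares reconstruction of the full token matrix has the smallest error. This is an illustrative assumption, not the paper's object-centric pruner: the function name, the greedy strategy, and the linear (least-squares) reconstruction model are all hypothetical simplifications.

```python
import numpy as np

def select_tokens(X, k):
    """Greedily pick k rows of X (vision tokens, shape n x d) that best
    reconstruct all n tokens via least squares.

    Hypothetical sketch of reconstruction-error-based pruning; OC-VTP's
    actual object-centric pruner is learned, not this greedy procedure.
    """
    n, d = X.shape
    selected, remaining = [], list(range(n))
    for _ in range(k):
        best_err, best_i = np.inf, None
        for i in remaining:
            S = X[selected + [i]]  # candidate kept tokens, shape (|sel|+1, d)
            # Solve X^T ~= S^T @ W for W, i.e. reconstruct every token
            # as a linear combination of the candidate kept tokens.
            W, *_ = np.linalg.lstsq(S.T, X.T, rcond=None)
            err = np.linalg.norm(X - (S.T @ W).T)  # Frobenius recon error
            if err < best_err:
                best_err, best_i = err, i
        selected.append(best_i)
        remaining.remove(best_i)
    return selected
```

Under this linear model, if the token matrix has rank at most k, the k selected tokens can reconstruct the full set almost exactly, which is the intuition behind "the most representative tokens are kept".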