🤖 AI Summary
Multimodal large language models (MLLMs) suffer from prohibitive computational overhead due to the vast number of visual tokens generated by Vision Transformer (ViT) encoders. Existing token pruning methods exhibit two critical limitations: LLM-side pruning neglects ViT computation, while ViT-side pruning lacks language guidance—risking removal of semantically critical visual cues—and suffers from feature distortion amplified by bidirectional attention. This work proposes the first training-free, semantics-aware ViT internal compression framework. It introduces two novel mechanisms: Neighbor-Guided Reconstruction (NGR), which temporarily reconstructs pruned tokens from neighboring tokens so they can still participate in attention, and Attention Stabilization (AS), which approximates the key/value representations of pruned tokens to preserve attention fidelity. Together, they maintain language-aligned visual information integrity under aggressive pruning. Evaluated across diverse image and video benchmarks, the method significantly outperforms existing training-free approaches and integrates seamlessly with mainstream LLM pruning strategies.
📝 Abstract
Multimodal Large Language Models (MLLMs) deliver strong vision-language performance but at high computational cost, driven by the numerous visual tokens processed by the Vision Transformer (ViT) encoder. Existing token pruning strategies are inadequate: LLM-stage token pruning overlooks the ViT's overhead, while conventional ViT token pruning, lacking language guidance, risks discarding textually critical visual cues and introduces feature distortions amplified by the ViT's bidirectional attention. To meet these challenges, we propose IPCV, a training-free, information-preserving compression framework for MLLM visual encoders. IPCV enables aggressive token pruning inside the ViT via Neighbor-Guided Reconstruction (NGR), which temporarily reconstructs pruned tokens to participate in attention at minimal overhead, then fully restores them before passing them to the LLM. In addition, we introduce Attention Stabilization (AS) to further mitigate the negative impact of token pruning by approximating the key/value representations of pruned tokens. AS can also be applied directly to existing LLM-side token pruning methods to enhance their performance. Extensive experiments show that IPCV substantially reduces end-to-end computation and outperforms state-of-the-art training-free token compression methods across diverse image and video benchmarks. Our code is available at https://github.com/Perkzi/IPCV.
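The abstract describes Neighbor-Guided Reconstruction only at a high level. As an illustration of the general idea — rebuilding each pruned token from its most similar surviving tokens — here is a minimal NumPy sketch. The function name, the cosine-similarity heuristic, and the softmax weighting are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def neighbor_guided_reconstruct(tokens, keep_mask, k=3):
    """Illustrative sketch (not the paper's method): rebuild each pruned
    token as a softmax-weighted average of its k most similar kept tokens.

    tokens:    (N, D) array of ViT token features
    keep_mask: (N,) boolean array, True for tokens that survive pruning
    """
    kept = tokens[keep_mask]                       # (M, D) surviving tokens
    out = tokens.copy()
    # Cosine similarity between every token and the kept set.
    t_norm = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    k_norm = kept / np.linalg.norm(kept, axis=1, keepdims=True)
    sim = t_norm @ k_norm.T                        # (N, M)
    for i in np.where(~keep_mask)[0]:
        nbr = np.argsort(sim[i])[-k:]              # k most similar kept tokens
        w = np.exp(sim[i, nbr])                    # softmax weights (always > 0)
        w = w / w.sum()
        out[i] = w @ kept[nbr]                     # weighted reconstruction
    return out
```

In the framework described above, such reconstructed tokens would participate in attention only temporarily and be restored before reaching the LLM; this sketch shows the reconstruction step alone.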