🤖 AI Summary
This work addresses the high computational cost of multimodal large language models, which stems from processing hundreds of visual tokens per image. Existing pruning methods rely on heuristic, static layer selection, which lacks interpretability and cross-model generalization. To overcome these limitations, the authors propose a dynamic visual token pruning framework grounded in matrix entropy. The approach introduces, for the first time, the concept of an "entropy collapse layer", identified via matrix entropy as the point where the information content of visual representations drops sharply. Leveraging the spectral equivalence of dual Gram matrices, the method efficiently quantifies token informativeness without relying on attention maps. Evaluated on LLaVA-1.5-7B, it reduces FLOPs by 68.2% while retaining 96.0% of the original performance, significantly outperforming existing techniques. The framework also generalizes well to high-resolution and video-based multimodal models.
📝 Abstract
Multimodal large language models (MLLMs) incur substantial inference cost due to the processing of hundreds of visual tokens per image. Although token pruning has proven effective for accelerating inference, determining when and where to prune remains largely heuristic. Existing approaches typically rely on static, empirically selected layers, which limits interpretability and transferability across models. In this work, we introduce a matrix-entropy perspective and identify an "Entropy Collapse Layer" (ECL), where the information content of visual representations exhibits a sharp and consistent drop, providing a principled criterion for selecting the pruning stage. Building on this observation, we propose EntropyPrune, a novel matrix-entropy-guided token pruning framework that quantifies the information value of individual visual tokens and prunes redundant ones without relying on attention maps. Moreover, to enable efficient computation, we exploit the spectral equivalence of dual Gram matrices, reducing the complexity of entropy computation and yielding up to a 64x theoretical speedup. Extensive experiments on diverse multimodal benchmarks demonstrate that EntropyPrune consistently outperforms state-of-the-art pruning methods in both accuracy and efficiency. On LLaVA-1.5-7B, our method achieves a 68.2% reduction in FLOPs while preserving 96.0% of the original performance. Furthermore, EntropyPrune generalizes effectively to high-resolution and video-based models, highlighting its strong robustness and scalability in practical MLLM acceleration. The code will be publicly available at https://github.com/YahongWang1/EntropyPrune.
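The "dual Gram matrix" trick rests on a standard linear-algebra fact: for a feature matrix X of shape n×d, the Gram matrices XXᵀ (n×n) and XᵀX (d×d) share the same nonzero eigenvalues, so matrix entropy can be computed from whichever is smaller. The sketch below illustrates this idea with NumPy; the entropy definition shown (von Neumann entropy of the trace-normalized Gram matrix) and the token/feature dimensions are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def matrix_entropy(X: np.ndarray) -> float:
    """Entropy of token features X (n_tokens x dim), illustrative sketch.

    Computes -sum(lam * log(lam)) over the eigenvalues lam of the
    trace-normalized Gram matrix. Because X @ X.T and X.T @ X share
    nonzero eigenvalues (and the same trace), we diagonalize whichever
    is smaller -- the source of the claimed speedup when dim << n_tokens.
    """
    n, d = X.shape
    G = X @ X.T if n <= d else X.T @ X   # pick the smaller Gram matrix
    G = G / np.trace(G)                  # normalize eigenvalues to sum to 1
    eig = np.linalg.eigvalsh(G)          # symmetric PSD -> real eigenvalues
    eig = eig[eig > 1e-12]               # drop numerical zeros
    return float(-np.sum(eig * np.log(eig)))

# Example: entropy of random token features
# (576 visual tokens as in LLaVA-style encoders; dim 64 is arbitrary here)
rng = np.random.default_rng(0)
X = rng.standard_normal((576, 64))
print(matrix_entropy(X))
```

Scanning this entropy layer by layer would locate the sharp drop that the abstract calls the Entropy Collapse Layer; the per-layer selection logic itself is specific to the paper and not reproduced here.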