AI Summary
To address the heavy inference overhead that redundant visual tokens impose on multimodal large language models (MLLMs) on mobile devices, this paper proposes LVPruning, a language-guided visual token pruning method. LVPruning requires no modification of the model's parameters: it dynamically scores visual-token importance via cross-modal attention, with language tokens serving as the criterion for adaptive pruning. It is presented as the first approach to use linguistic signals as the primary metric of visual-token importance, enabling efficient, near-lossless compression. Applied to LLaVA-1.5, LVPruning removes 90% of visual tokens by the model's intermediate layers, cutting inference TFLOPs by 62.1% while degrading average performance across nine major multimodal benchmarks by only 0.45%. This efficiency gain substantially improves the deployability of MLLMs on resource-constrained edge devices.
Abstract
Multi-modal Large Language Models (MLLMs) have achieved remarkable success by integrating visual and textual modalities. However, they incur significant computational overhead due to the large number of vision tokens processed, limiting their practicality in resource-constrained environments. We introduce Language-Guided Vision Token Pruning (LVPruning) for MLLMs, an effective yet simple method that significantly reduces the computational burden while preserving model performance. LVPruning employs cross-attention modules to compute the importance of vision tokens based on their interaction with language tokens, determining which to prune. Importantly, LVPruning can be integrated without modifying the original MLLM parameters, which makes LVPruning simple to apply or remove. Our experiments show that LVPruning can effectively reduce up to 90% of vision tokens by the middle layer of LLaVA-1.5, resulting in a 62.1% decrease in inference tera floating-point operations (TFLOPs), with an average performance loss of just 0.45% across nine multi-modal benchmarks.
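The core mechanism, scoring each vision token by its cross-attention with the language tokens and keeping only the highest-scoring fraction, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the function name, the softmax-then-average importance score, and the `keep_ratio` parameter are illustrative choices, not the paper's exact formulation (which uses learned cross-attention modules inserted into the MLLM).

```python
import numpy as np

def language_guided_prune(vision_tokens, language_tokens, keep_ratio=0.1):
    """Keep the fraction `keep_ratio` of vision tokens most attended
    to by the language tokens (hypothetical single-head sketch)."""
    d = vision_tokens.shape[-1]
    # Language tokens act as queries, vision tokens as keys.
    logits = language_tokens @ vision_tokens.T / np.sqrt(d)   # (L, V)
    # Row-wise softmax over vision tokens.
    logits -= logits.max(axis=-1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)
    # Importance of each vision token: mean attention it receives.
    importance = attn.mean(axis=0)                            # (V,)
    k = max(1, int(round(keep_ratio * vision_tokens.shape[0])))
    keep_idx = np.sort(np.argsort(importance)[-k:])           # top-k, in order
    return vision_tokens[keep_idx], keep_idx

# Example: 576 vision tokens (LLaVA-1.5's count) pruned to ~10%.
rng = np.random.default_rng(0)
vision = rng.normal(size=(576, 64))
language = rng.normal(size=(32, 64))
kept, idx = language_guided_prune(vision, language, keep_ratio=0.1)
```

Because the score is computed from the current language tokens, the surviving vision tokens adapt to each prompt rather than being fixed at training time, which is what distinguishes this from static token-merging schemes.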