🤖 AI Summary
To address the high computational overhead in multimodal large language models (MM-LLMs) caused by concatenating visual and textual tokens, this paper proposes a dynamic two-stage token pruning method. In the first stage, leveraging the long-tailed distribution of CLS token similarities, we introduce a novel inflection-point-driven dynamic visual token pruning strategy. In the second stage, cross-modal correlation modeling is employed to guide adaptive, layer-wise textual token sparsification within the LLM. The method reduces total token count to 22% of the original while preserving model accuracy, yielding substantial inference speedup. Our core contributions lie in the joint design of inflection-point identification in long-tailed similarity distributions and cross-modal collaborative pruning, achieving an optimal trade-off between computational efficiency and task performance.
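The first stage sorts the visual tokens by their similarity to the CLS token and cuts the sequence at the inflection point of the resulting long-tailed curve. The summary does not specify the exact detector, so the sketch below assumes a common elbow heuristic (the point on the sorted curve farthest from the chord joining its endpoints); the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def prune_visual_tokens(cls_sims: np.ndarray) -> np.ndarray:
    """Keep visual tokens up to the inflection point of the sorted
    CLS-similarity curve (elbow heuristic; the paper's detector may differ).

    cls_sims: (num_tokens,) similarity of each visual token to the CLS token.
    Returns the indices of the tokens to keep.
    """
    order = np.argsort(cls_sims)[::-1]   # sort similarities descending
    curve = cls_sims[order]
    n = len(curve)
    x = np.arange(n, dtype=float)
    x0, y0, x1, y1 = 0.0, curve[0], n - 1.0, curve[-1]
    # Perpendicular distance from each point on the curve to the chord
    # joining (x0, y0) and (x1, y1); the elbow maximizes this distance.
    dist = np.abs((y1 - y0) * x - (x1 - x0) * curve + x1 * y0 - y1 * x0)
    dist /= np.hypot(y1 - y0, x1 - x0)
    knee = int(np.argmax(dist))
    return order[: knee + 1]             # token indices before the inflection
```

Because the cut point is derived from each image's own similarity curve, the number of retained tokens adapts per input rather than being a fixed budget.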
📝 Abstract
Recently, multimodal large language models (MM-LLMs) have achieved significant success on various tasks, but their high computational cost limits widespread application. The main computational burden arises from processing the concatenated text and visual tokens in the LLM layers, where input token length directly affects efficiency. Our analysis of visual tokens reveals that their similarity to the CLS token follows a long-tailed distribution, with only a few tokens showing high similarity. To address this, we propose a dynamic pruning algorithm that identifies the inflection point in the sorted CLS-similarity curve, enabling effective pruning of visual tokens to accelerate inference. Additionally, we perform a second round of pruning in the LLM layers, filtering out low-correlation tokens through the interaction between visual and textual features. Experimental results demonstrate that our method achieves performance comparable to the original model while using only 22% of the original token count. Our source code will be made publicly available upon acceptance.
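The second-stage pruning scores tokens by cross-modal correlation inside the LLM. The abstract does not detail the scoring rule, so the sketch below assumes a simple variant: each text token is scored by its strongest cosine similarity to any retained visual token, and only the top fraction survives. The function name and the fixed `keep_ratio` are illustrative assumptions; the paper's criterion is adaptive and layer-wise.

```python
import numpy as np

def prune_text_tokens(text_feats: np.ndarray,
                      vis_feats: np.ndarray,
                      keep_ratio: float = 0.5) -> np.ndarray:
    """Keep the text tokens most correlated with the visual tokens
    (illustrative stand-in for the paper's layer-wise adaptive rule).

    text_feats: (num_text, d) hidden states of text tokens.
    vis_feats:  (num_visual, d) hidden states of retained visual tokens.
    Returns indices of kept text tokens in their original order.
    """
    # L2-normalize so dot products are cosine similarities.
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    v = vis_feats / np.linalg.norm(vis_feats, axis=1, keepdims=True)
    corr = t @ v.T                       # (num_text, num_visual)
    scores = corr.max(axis=1)            # strongest cross-modal link per token
    k = max(1, int(len(scores) * keep_ratio))
    return np.sort(np.argsort(scores)[::-1][:k])  # preserve sequence order
```

Applying such a filter at each LLM layer lets the text sequence shrink progressively as low-correlation tokens stop contributing to the multimodal context.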