Script: Graph-Structured and Query-Conditioned Semantic Token Pruning for Multimodal Large Language Models

📅 2025-12-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the high memory overhead and inference latency that excessive visual tokens cause in multimodal large language models (MLLMs), this paper proposes a plug-and-play, fine-tuning-free visual token pruning method. The approach integrates graph-structured modeling, which captures spatial and semantic relationships among tokens, with query-conditioned semantic importance scoring, thereby jointly suppressing redundancy and preserving task-relevant information. The method adaptively selects critical visual tokens without modifying model weights. Evaluated on 14 image and video understanding benchmarks, it significantly outperforms existing methods: for instance, on LLaVA-NeXT-7B, it achieves up to 6.8× prefill acceleration and a 10× reduction in FLOPs while retaining 96.88% of the original performance. The code and models are publicly available.
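The summary describes two signals combined into one token-selection rule: a query-conditioned relevance score and a graph-based redundancy penalty. The paper card does not give the exact formulation, so the following is a minimal illustrative sketch under assumed details: the function name `script_prune_sketch`, cosine-similarity scoring, and the linear combination of the two signals are all hypothetical, not the authors' actual method.

```python
import numpy as np

def script_prune_sketch(visual_tokens, query_embedding,
                        keep_ratio=0.25, redundancy_weight=0.5):
    """Hypothetical sketch: keep the top-k visual tokens that are
    relevant to the query and not redundant with other tokens."""
    # Normalize so dot products become cosine similarities.
    t = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    q = query_embedding / np.linalg.norm(query_embedding)

    # Query-conditioned relevance: similarity of each token to the query.
    relevance = t @ q

    # Graph-structured redundancy proxy: mean similarity to all other
    # tokens -- tokens that duplicate others score high here.
    sim = t @ t.T
    np.fill_diagonal(sim, 0.0)
    redundancy = sim.mean(axis=1)

    # Combined score: reward relevance, penalize redundancy (weights assumed).
    score = relevance - redundancy_weight * redundancy

    k = max(1, int(keep_ratio * len(visual_tokens)))
    # Keep the k highest-scoring tokens, preserving their original order.
    return np.sort(np.argsort(score)[-k:])
```

Because selection happens purely on embeddings, a rule like this can run before the LLM prefill without touching model weights, which is what makes the method fine-tuning-free and plug-and-play.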

📝 Abstract
The rapid growth of visual tokens in multimodal large language models (MLLMs) leads to excessive memory consumption and inference latency, especially when handling high-resolution images and videos. Token pruning is a technique used to mitigate this issue by removing redundancy, but existing methods often ignore relevance to the user query or suffer from the limitations of attention mechanisms, reducing their adaptability and effectiveness. To address these challenges, we propose Script, a plug-and-play pruning method that requires no retraining and generalizes across diverse MLLMs. Script comprises two modules: a graph-structured pruning module that removes visually redundant tokens, and a query-conditioned semantic pruning module that preserves query-relevant visual information. Together, they enhance performance on multimodal tasks. Experiments on fourteen benchmarks across image and video understanding tasks show that Script consistently achieves higher model efficiency and predictive accuracy compared to existing pruning methods. On LLaVA-NeXT-7B, it achieves up to 6.8x prefill speedup and 10x FLOP reduction, while retaining 96.88% of the original performance.
Problem

Research questions and friction points this paper is trying to address.

Reduces visual token redundancy in multimodal models
Preserves query-relevant information during token pruning
Improves efficiency and accuracy without retraining models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph-structured pruning removes visually redundant tokens
Query-conditioned semantic pruning retains query-relevant visual information
Plug-and-play method generalizes across diverse MLLMs without retraining
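The first Innovation bullet, graph-structured pruning of visually redundant tokens, can be sketched as building a graph whose edges connect spatially adjacent, semantically similar patch tokens and dropping one endpoint of each such edge. The grid adjacency, the similarity threshold, and the function name `graph_redundancy_prune` are assumptions for illustration; the paper's actual graph construction may differ.

```python
import numpy as np

def graph_redundancy_prune(tokens, grid_w, sim_threshold=0.9):
    """Hypothetical sketch: link tokens that are neighbors on the patch
    grid AND highly similar, then remove the duplicate endpoint."""
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    n = len(tokens)
    removed = set()
    for i in range(n):
        if i in removed:
            continue
        r, c = divmod(i, grid_w)
        # Right and down neighbors are enough to cover every grid edge once.
        for j in (i + 1, i + grid_w):
            if j >= n or (j == i + 1 and c == grid_w - 1):
                continue  # off the grid, or wrapping to the next row
            if j in removed:
                continue
            if float(t[i] @ t[j]) > sim_threshold:
                removed.add(j)  # neighbor j duplicates token i's content
    return [i for i in range(n) if i not in removed]
```

A locality-based rule like this only suppresses visual duplication; in the described method it is paired with the query-conditioned module so that task-relevant tokens survive even when they resemble their neighbors.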