Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs

πŸ“… 2025-06-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
To address the high inference overhead caused by visual token redundancy in multimodal large language models (MLLMs), this paper proposes CDPrunerβ€”a training-free, model-agnostic visual token pruning method. Its core innovation lies in introducing **instruction-conditioned similarity** for the first time and modeling conditional diversity via determinantal point processes (DPP), enabling instruction-aware, unsupervised visual token pruning that departs from conventional attention- or similarity-driven paradigms. Evaluated on mainstream MLLMs such as LLaVA, CDPruner reduces FLOPs by 95% and CUDA latency by 78% while retaining 94% of the original task accuracy. Moreover, it significantly enhances robustness in vision-language understanding under high pruning ratios, establishing new state-of-the-art performance.


πŸ“ Abstract
In multimodal large language models (MLLMs), the length of input visual tokens is often significantly greater than that of their textual counterparts, leading to a high inference cost. Many works aim to address this issue by removing redundant visual tokens. However, current approaches either rely on attention-based pruning, which retains numerous duplicate tokens, or use similarity-based pruning, overlooking the instruction relevance, consequently causing suboptimal performance. In this paper, we go beyond attention or similarity by proposing a novel visual token pruning method named CDPruner, which maximizes the conditional diversity of retained tokens. We first define the conditional similarity between visual tokens conditioned on the instruction, and then reformulate the token pruning problem with determinantal point process (DPP) to maximize the conditional diversity of the selected subset. The proposed CDPruner is training-free and model-agnostic, allowing easy application to various MLLMs. Extensive experiments across diverse MLLMs show that CDPruner establishes new state-of-the-art on various vision-language benchmarks. By maximizing conditional diversity through DPP, the selected subset better represents the input images while closely adhering to user instructions, thereby preserving strong performance even with high reduction ratios. When applied to LLaVA, CDPruner reduces FLOPs by 95% and CUDA latency by 78%, while maintaining 94% of the original accuracy. Our code is available at https://github.com/Theia-4869/CDPruner.
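The selection procedure the abstract describes can be sketched in code. The snippet below is a hypothetical illustration, not the authors' implementation: it assumes the conditional kernel is formed by weighting the pairwise visual-token similarity matrix with each token's mean cosine similarity to the instruction tokens (the paper's exact definition of conditional similarity may differ), and it uses standard fast greedy MAP inference for DPPs to pick the retained subset. The function name `cdpruner_sketch` and all array shapes are assumptions for illustration.

```python
import numpy as np

def cdpruner_sketch(visual_tokens, instruction_tokens, keep):
    """Hypothetical sketch of DPP-based conditional-diversity token pruning.

    visual_tokens:      (N, d) array of visual token features
    instruction_tokens: (M, d) array of instruction token features
    keep:               number of visual tokens to retain (keep <= rank of the kernel)
    Returns the sorted indices of the retained visual tokens.
    """
    # Normalize features so dot products are cosine similarities.
    V = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    T = instruction_tokens / np.linalg.norm(instruction_tokens, axis=1, keepdims=True)

    # Instruction relevance: mean cosine similarity to the instruction tokens,
    # shifted to be positive so it can serve as a DPP quality score (an assumed
    # form of the paper's instruction-conditioned similarity).
    relevance = (V @ T.T).mean(axis=1)
    quality = (relevance - relevance.min()) + 1e-6

    # Conditional DPP kernel: L = diag(q) S diag(q), with S the pairwise
    # similarity between visual tokens. Diverse, instruction-relevant subsets
    # get large determinants det(L_subset).
    S = V @ V.T
    L = quality[:, None] * S * quality[None, :]

    # Fast greedy MAP inference for DPPs: repeatedly pick the token with the
    # largest remaining conditional gain, updating gains incrementally.
    N = L.shape[0]
    d2 = np.diag(L).copy()        # conditional marginal gains
    c = np.zeros((keep, N))       # incremental Cholesky-style rows
    selected = []
    for i in range(keep):
        j = int(np.argmax(d2))
        selected.append(j)
        e = (L[j] - c[:i].T @ c[:i, j]) / np.sqrt(d2[j])
        c[i] = e
        d2 = d2 - e ** 2
        d2[j] = -np.inf           # never re-select the same token
    return sorted(selected)
```

After pruning, only the retained rows of the visual token sequence are passed to the LLM, which is where the FLOPs and latency savings come from; because the kernel is conditioned on the instruction, the same image can yield different retained subsets for different prompts.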
Problem

Research questions and friction points this paper is trying to address.

Reduces redundant visual tokens in MLLMs to lower inference costs
Improves token pruning by maximizing conditional diversity via DPP
Maintains performance while significantly cutting FLOPs and latency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Maximizes conditional diversity for token pruning
Uses determinantal point process (DPP) method
Training-free and model-agnostic pruning solution
πŸ‘₯ Authors
Qizhe Zhang
School of Computer Science, Peking University
Vision Language Model · Computer Vision · Machine Learning
Mengzhen Liu
National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
Lichen Li
National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
Ming Lu
National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
Yuan Zhang
National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
Junwen Pan
ByteDance
Deep Learning · Machine Learning · Image Segmentation
Qi She
National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University
Shanghang Zhang
Peking University
Embodied AI · Foundation Models