DyMU: Dynamic Merging and Virtual Unmerging for Efficient VLMs

📅 2025-04-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address computational redundancy in vision-language models (VLMs) caused by the fixed-length visual token outputs of Vision Transformers, this paper proposes a training-free, dynamically adaptive acceleration framework. Methodologically, it introduces two components: (i) Dynamic Token Merging (DToMe), which compresses visual tokens according to image complexity, and (ii) Virtual Token Unmerging (VTU), which reconstructs the attention dynamics of the full token sequence so the downstream LLM behaves as if no compression had occurred. The framework requires no architectural modifications or additional training, applies to most state-of-the-art VLM architectures (including AnyRes-based visual encoders), and gives users control over the trade-off between computation cost and accuracy. Experiments demonstrate a 32-85% reduction in visual tokens while maintaining performance comparable to full-length baselines across diverse image and video understanding tasks, yielding substantial inference speedup.

📝 Abstract
We present DyMU, an efficient, training-free framework that dynamically reduces the computational burden of vision-language models (VLMs) while maintaining high task performance. Our approach comprises two key components. First, Dynamic Token Merging (DToMe) reduces the number of visual token embeddings by merging similar tokens based on image complexity, addressing the inherent inefficiency of fixed-length outputs in vision transformers. Second, Virtual Token Unmerging (VTU) simulates the expected token sequence for large language models (LLMs) by efficiently reconstructing the attention dynamics of a full sequence, thus preserving the downstream performance without additional fine-tuning. Unlike previous approaches, our method dynamically adapts token compression to the content of the image and operates completely training-free, making it readily applicable to most state-of-the-art VLM architectures. Extensive experiments on image and video understanding tasks demonstrate that DyMU can reduce the average visual token count by 32%-85% while achieving comparable performance to full-length models across diverse VLM architectures, including the recently popularized AnyRes-based visual encoders. Furthermore, through qualitative analyses, we demonstrate that DToMe effectively adapts token reduction based on image complexity and, unlike existing systems, provides users more control over computational costs. Project page: https://mikewangwzhl.github.io/dymu/.
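The abstract's core DToMe idea, merging similar visual tokens so that the retained token count tracks image complexity, can be illustrated with a minimal sketch. The greedy bipartite matching below loosely follows ToMe-style merging and is an illustrative assumption; `dynamic_merge` is a hypothetical name, and the paper's actual DToMe calibrates per-layer merging thresholds from batch statistics, which is omitted here.

```python
import numpy as np

def dynamic_merge(tokens: np.ndarray, threshold: float):
    """Threshold-based token merging sketch (NOT the paper's exact DToMe).

    Tokens are split into two sets A/B (even/odd indices, as in ToMe);
    each A-token merges with its most similar B-token if their cosine
    similarity exceeds `threshold`. Simple images (many redundant tokens)
    therefore end up with fewer tokens than complex ones.
    Returns the merged tokens and a `sizes` array recording how many
    original tokens each merged token represents.
    """
    normed = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    a_idx = np.arange(0, len(tokens), 2)
    b_idx = np.arange(1, len(tokens), 2)
    sim = normed[a_idx] @ normed[b_idx].T          # (|A|, |B|) cosine sims
    best_b = sim.argmax(axis=1)                    # best B match per A-token
    best_sim = sim[np.arange(len(a_idx)), best_b]

    merged, sizes, taken_b = [], [], set()
    for i, a in enumerate(a_idx):
        b = b_idx[best_b[i]]
        if best_sim[i] >= threshold and b not in taken_b:
            merged.append((tokens[a] + tokens[b]) / 2)  # average the pair
            sizes.append(2)
            taken_b.add(b)
        else:
            merged.append(tokens[a])
            sizes.append(1)
    for b in b_idx:                                # keep unmerged B-tokens
        if b not in taken_b:
            merged.append(tokens[b])
            sizes.append(1)
    return np.stack(merged), np.array(sizes)
```

On a redundant input (duplicated tokens) this collapses pairs aggressively, while a random "complex" input keeps most tokens, which is the content-adaptive behavior the abstract describes.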
Problem

Research questions and friction points this paper is trying to address.

Vision transformers emit fixed-length visual token sequences regardless of image complexity, creating computational redundancy in VLMs
Existing token-compression methods do not adapt to image content
Reducing tokens typically degrades downstream performance or requires additional fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Token Merging (DToMe) merges similar visual tokens based on image complexity
Virtual Token Unmerging (VTU) efficiently reconstructs the attention dynamics of the full token sequence for the LLM
Training-free design requires no fine-tuning and applies to most state-of-the-art VLM architectures
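The VTU bullet above rests on an identity that lets merged tokens stand in for the full sequence: attending over a key/value pair that would appear s times in the unmerged sequence is equivalent to adding log s to that key's attention logit. A minimal single-head sketch of this proportional-attention identity follows; `virtual_unmerged_attention` is a hypothetical name, and the paper's actual VTU additionally handles details such as positional encodings that this sketch omits.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Plain scaled dot-product attention (single head, no masking)."""
    logits = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(logits) @ v

def virtual_unmerged_attention(q, k, v, sizes):
    """Attend over merged keys/values as if each were duplicated
    `sizes[j]` times in the full sequence, without materializing it.

    A key with logit l repeated s times contributes s * exp(l) to the
    softmax normalizer, which equals a single key with logit l + log(s).
    """
    logits = q @ k.T / np.sqrt(q.shape[-1]) + np.log(sizes)
    return softmax(logits) @ v
```

The identity means the LLM sees the attention pattern of the full-length sequence at the cost of the merged one, which is why compression need not hurt downstream performance.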