VisionSelector: End-to-End Learnable Visual Token Compression for Efficient Multimodal LLMs

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the computational and memory bottlenecks caused by visual token explosion in multimodal large language models (MLLMs) when processing high-resolution or multi-image inputs, this paper proposes an end-to-end learnable visual token compression framework. Unlike existing methods relying on heuristic rules or suffering from attention sink bias, our approach formulates compression as a lightweight, plug-and-play, differentiable Top-K selection process. We introduce a decoupled scorer module and a curriculum annealing strategy to bridge the training-inference gap. With only 12.85M parameters, the framework supports arbitrary compression ratios and is compatible with mainstream MLLM architectures. Experiments demonstrate that at 30% token retention rate, MME accuracy remains at 100%; at 10% retention, it outperforms the state-of-the-art by 12.14% while doubling prefill throughput.

📝 Abstract
Multimodal Large Language Models (MLLMs) encounter significant computational and memory bottlenecks from the massive number of visual tokens generated by high-resolution images or multi-image inputs. Previous token compression techniques are often constrained by heuristic rules that risk discarding critical information, and they may suffer from biases, such as attention sinks, that lead to sharp performance drops under aggressive compression ratios. To address these limitations, we reformulate token compression as an end-to-end learnable decision process within a lightweight, plug-and-play framework. Specifically, we propose VisionSelector, a scorer module decoupled from the MLLM backbone that incorporates a differentiable Top-K mechanism and a curriculum annealing strategy to bridge the training-inference gap, enabling efficient and adaptive token selection across arbitrary compression rates. Remarkably lightweight with only 12.85M trainable parameters, VisionSelector generalizes across various compression rates and adaptively identifies critical tokens. This leads to superior performance across all compression budgets, evidenced by preserving 100% accuracy on MME with a 30% retention budget, outperforming prior methods by 12.14% at a 10% retention budget, and doubling prefill speed. Our code is available at https://github.com/JulietChoo/VisionSelector.
Problem

Research questions and friction points this paper is trying to address.

Reduces computational bottlenecks from excessive visual tokens in MLLMs
Replaces heuristic compression with end-to-end learnable token selection
Enables adaptive token compression without critical information loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

End-to-end learnable framework for token compression
Differentiable Top-K mechanism with curriculum annealing
Lightweight plug-and-play module decoupled from MLLM backbone
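The paper does not spell out the exact relaxation used, but the combination of a differentiable Top-K and temperature annealing can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: a sigmoid centered at the k-th largest score gives a soft selection mask, and a hypothetical `anneal_temperature` schedule gradually sharpens it so training transitions from soft scoring toward the hard Top-K used at inference.

```python
import numpy as np

def soft_topk_mask(scores, k, temperature):
    """Soft relaxation of Top-K selection over token scores (illustrative).

    A sigmoid centered midway between the k-th and (k+1)-th largest scores
    produces a near-0/1 mask; as temperature -> 0 it approaches hard Top-K.
    """
    sorted_scores = np.sort(scores)[::-1]
    threshold = (sorted_scores[k - 1] + sorted_scores[k]) / 2.0
    return 1.0 / (1.0 + np.exp(-(scores - threshold) / temperature))

def anneal_temperature(step, total_steps, t_start=1.0, t_end=0.01):
    """Curriculum-style schedule: linearly decay from soft to near-hard."""
    frac = min(step / total_steps, 1.0)
    return t_start + frac * (t_end - t_start)

# Example: retain 3 of 8 visual tokens, early vs. late in training.
rng = np.random.default_rng(0)
scores = rng.normal(size=8)
early_mask = soft_topk_mask(scores, k=3, temperature=anneal_temperature(0, 1000))
late_mask = soft_topk_mask(scores, k=3, temperature=anneal_temperature(1000, 1000))
```

Because the mask stays differentiable at high temperature, the scorer can be trained end to end with the frozen MLLM backbone; by the end of annealing the mask is effectively binary, matching the hard selection applied at inference time.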
Jiaying Zhu
University of Science and Technology of China
Yurui Zhu
University of Science and Technology of China
Xin Lu
University of Science and Technology of China
Wenrui Yan
ZTE Corporation
Dong Li
University of Science and Technology of China
Kunlin Liu
ZTE Corporation
Xueyang Fu
University of Science and Technology of China
Zheng-Jun Zha
University of Science and Technology of China