Towards Lossless Ultimate Vision Token Compression for VLMs

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the excessive computational overhead and high inference latency of vision-language models (VLMs) on high-resolution image/video inputs—caused by redundant visual tokens—this paper proposes an end-to-end lossless visual token compression framework. Departing from conventional attention- or similarity-based compression paradigms, the method introduces a spatially orthogonal iterative merging mechanism and an attention-agnostic spectral pruning unit, enabling position-invariant, cross-layer compatible, and progressive token fusion and elimination. The framework requires no training for deployment, is fully compatible with FlashAttention, and achieves zero residual visual tokens at the deepest LLM layer for the first time. Experiments demonstrate a 2× inference speedup with negligible accuracy degradation. Moreover, the method is plug-and-play across diverse VLM architectures without fine-tuning or retraining.
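The summary above describes a "spatially orthogonal iterative merging mechanism" without giving the operator. As a rough illustration only — the merge ratio, pooling operator, and alternation schedule below are all assumptions, not the paper's implementation — one can picture merging adjacent tokens along the height axis, then the width axis, so each step compresses one spatial axis at a time:

```python
import numpy as np

def orthogonal_iterative_merge(tokens, steps=2):
    """Illustrative sketch: alternately merge adjacent vision tokens
    along the height axis, then the width axis (orthogonal spatial
    axes), halving the grid along one axis per step.

    tokens: array of shape (H, W, C) -- a 2-D grid of token embeddings.
    """
    for step in range(steps):
        axis = step % 2                       # 0 -> merge rows, 1 -> merge columns
        n = tokens.shape[axis] - tokens.shape[axis] % 2
        if n < 2:
            continue                          # nothing left to merge on this axis
        t = np.take(tokens, range(n), axis=axis)
        shape = list(t.shape)
        shape[axis] = n // 2
        shape.insert(axis + 1, 2)             # expose adjacent pairs
        tokens = t.reshape(shape).mean(axis=axis + 1)  # average each pair
    return tokens

grid = np.random.rand(8, 8, 4)                     # 64 vision tokens, 4-dim features
merged = orthogonal_iterative_merge(grid, steps=2) # 16 tokens remain
```

Because the merge depends only on grid position, not on attention scores, a scheme of this flavor would be position-invariant in the sense the summary claims — no token is favored for sitting near a class token or image border.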

📝 Abstract
Vision-language models encounter challenges in computational efficiency and latency, primarily due to the substantial redundancy in the token representations of high-resolution images and videos. Current attention/similarity-based compression algorithms suffer from either position bias or class imbalance, leading to significant accuracy degradation. They also fail to generalize to shallow LLM layers, which exhibit weaker cross-modal interactions. To address this, we extend token compression to the visual encoder through an effective iterative merging scheme that is orthogonal in spatial axes, accelerating computation across the entire VLM. Furthermore, we integrate a spectrum pruning unit into the LLM through an attention/similarity-free low-pass filter, which gradually prunes redundant visual tokens and is fully compatible with modern FlashAttention. On this basis, we propose the Lossless Ultimate Vision tokens Compression (LUVC) framework. LUVC systematically compresses visual tokens until complete elimination at the final layer of the LLM, so that the high-dimensional visual features are gradually fused into the multimodal queries. Experiments show that LUVC achieves a 2× inference speedup in the language model with negligible accuracy degradation, and its training-free design enables immediate deployment across multiple VLMs.
Problem

Research questions and friction points this paper is trying to address.

Compress visual tokens to reduce computational cost in VLMs
Address position bias and class imbalance in compression algorithms
Generalize token compression to shallow LLM layers effectively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative merging scheme in visual encoder
Attention-free low-pass filter for token pruning
Systematic compression until final LLM layer
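The "attention-free low-pass filter" in the second bullet is not specified on this page. Under the assumption that "low-pass" means filtering the token sequence in the frequency domain — the FFT-based scoring rule below is a hypothetical illustration, not the paper's formula — a pruning unit of this flavor might drop the tokens best explained by the smooth, low-frequency part of the sequence:

```python
import numpy as np

def spectral_prune(tokens, keep_ratio=0.5, cutoff=0.25):
    """Hypothetical sketch of attention/similarity-free pruning.

    Low-pass filter the token sequence with an FFT along the token
    axis, then drop the tokens best reconstructed by the smooth
    (low-frequency) signal -- i.e. the most redundant ones.

    tokens: array of shape (N, C) -- a sequence of token embeddings.
    """
    n = tokens.shape[0]
    freq = np.fft.rfft(tokens, axis=0)        # spectrum along the token axis
    k = max(1, int(freq.shape[0] * cutoff))
    freq[k:] = 0.0                            # low-pass: zero high frequencies
    smooth = np.fft.irfft(freq, n=n, axis=0)  # the redundant, smooth component
    # distinctiveness score: residual left after low-pass reconstruction
    residual = np.linalg.norm(tokens - smooth, axis=1)
    keep = max(1, int(n * keep_ratio))
    idx = np.sort(np.argsort(residual)[-keep:])  # keep the most distinctive
    return tokens[idx], idx

seq = np.random.rand(16, 8)            # 16 tokens, 8-dim features
pruned, kept = spectral_prune(seq)     # half of the tokens survive
```

A score of this kind never reads attention weights, which is consistent with the paper's FlashAttention compatibility claim: FlashAttention does not materialize attention maps, so any pruning criterion that needs them cannot be used alongside it.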
Dehua Zheng
Huawei Noah’s Ark Lab
Mouxiao Huang
Huawei Noah’s Ark Lab
Borui Jiang
Huawei Noah’s Ark Lab
Hailin Hu
Huawei Noah’s Ark Lab
Xinghao Chen
Huawei Noah’s Ark Lab