Adaptive-VoCo: Complexity-Aware Visual Token Compression for Vision-Language Models

📅 2025-12-20
🤖 AI Summary
To address the prohibitive computational and memory overhead of high-dimensional visual features in large-scale vision-language models, this paper proposes a complexity-adaptive visual token compression framework. Unlike fixed-rate compression methods such as VoCo-LLaMA, our approach introduces a lightweight complexity predictor that jointly models patch-wise entropy and attention-map variance to dynamically quantify an image's visual complexity. We further design a joint loss function that combines rate regularization with complexity alignment, enabling end-to-end optimization. Integrated into the VoCo-LLaMA architecture, our method performs dynamic token compression with multi-task co-optimization. Experiments across multiple multimodal understanding benchmarks demonstrate that our method significantly outperforms fixed-rate baselines, maintaining strong cross-modal representation capability while improving inference efficiency, robustness, and generalization.
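The summary does not specify how the complexity predictor is implemented. A minimal sketch of how patch-token entropy and attention-map variance might be combined into a compression-rate selector is shown below; all function names, the combination weights, and the rate schedule are assumptions for illustration, not the paper's actual design:

```python
import torch

def visual_complexity(patch_tokens, attn_maps, eps=1e-8):
    """Estimate image complexity from vision-encoder statistics (hypothetical).

    patch_tokens: (N, D) patch embeddings from the vision encoder.
    attn_maps:    (H, N, N) attention weights from a late encoder layer.
    """
    # Patch-wise entropy: treat each patch's softmaxed feature vector as a
    # distribution; higher average entropy suggests richer visual content.
    probs = torch.softmax(patch_tokens, dim=-1)
    entropy = -(probs * (probs + eps).log()).sum(dim=-1).mean()

    # Attention-map variance: high variance indicates attention concentrating
    # on localized, detail-heavy regions rather than spreading uniformly.
    attn_var = attn_maps.var()

    # Combine into a scalar score in (0, 1); the 0.5/0.5 weights are assumptions.
    return torch.sigmoid(0.5 * entropy + 0.5 * attn_var)

def select_rate(score, rates=(2, 4, 8, 16)):
    """Map a complexity score to a compression rate (hypothetical schedule):
    more complex images get a lower rate, i.e. keep more VoCo tokens."""
    idx = min(int(score * len(rates)), len(rates) - 1)
    return rates[len(rates) - 1 - idx]
```

In this sketch a simple image (low score) maps to an aggressive rate such as 16x, while a cluttered image falls back to 2x, which matches the adaptive behavior the summary describes.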

📝 Abstract
In recent years, large-scale vision-language models (VLMs) have demonstrated remarkable performance on multimodal understanding and reasoning tasks. However, handling high-dimensional visual features often incurs substantial computational and memory costs. VoCo-LLaMA alleviates this issue by compressing visual patch tokens into a few VoCo tokens, reducing computational overhead while preserving strong cross-modal alignment. Nevertheless, such approaches typically adopt a fixed compression rate, limiting their ability to adapt to varying levels of visual complexity. To address this limitation, we propose Adaptive-VoCo, a framework that augments VoCo-LLaMA with a lightweight predictor for adaptive compression. This predictor dynamically selects an optimal compression rate by quantifying an image's visual complexity using statistical cues from the vision encoder, such as patch token entropy and attention map variance. Furthermore, we introduce a joint loss function that integrates rate regularization with complexity alignment. This enables the model to balance inference efficiency with representational capacity, particularly in challenging scenarios. Experimental results show that our method consistently outperforms fixed-rate baselines across multiple multimodal tasks, highlighting the potential of adaptive visual compression for creating more efficient and robust VLMs.
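The joint objective mentioned in the abstract (task loss plus rate regularization plus complexity alignment) could be sketched as follows. The coefficients, the use of a kept-token fraction, and the quadratic alignment penalty are assumptions, since the abstract does not give the exact formulation:

```python
import torch

def adaptive_voco_loss(task_loss, keep_ratio, complexity,
                       lambda_rate=0.01, lambda_align=0.1):
    """Hypothetical joint loss for adaptive compression.

    keep_ratio: fraction of visual tokens retained after compression, in [0, 1].
    complexity: predicted visual-complexity score for the image, in [0, 1].
    """
    # Rate regularization: penalize retaining many tokens, favoring efficiency.
    rate_reg = lambda_rate * keep_ratio
    # Complexity alignment: the retained fraction should track the predicted
    # complexity, so complex images keep more tokens and simple images fewer.
    align = lambda_align * (keep_ratio - complexity) ** 2
    return task_loss + rate_reg + align
```

Under this formulation, a mismatch between the chosen compression level and the predicted complexity is penalized, which is one plausible way to realize the efficiency/capacity trade-off the abstract describes.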
Problem

Research questions and friction points this paper is trying to address.

Adaptive compression for varying visual complexity
Balancing inference efficiency with representational capacity
Reducing computational costs in vision-language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive compression rate selection using visual complexity
Lightweight predictor with statistical cues for dynamic compression
Joint loss balancing efficiency and representational capacity