🤖 AI Summary
This work addresses token compression in hybrid video vision-language models that interleave attention with linear-time state-space modules such as Mamba, focusing on long videos, where existing methods fail to model the dynamic shifts in token importance across layers. The authors propose a layer-wise progressive token compression strategy coupled with a unified language-aware scoring mechanism, enabling efficient token reduction across both attention and Mamba blocks. Key innovations include the first empirical characterization of inter-layer sparsity and stability of token importance in such hybrid architectures, a low-to-high progressive compression schedule, and an implicit-attention proxy scorer tailored to Mamba. With only 25% of visual tokens retained, the method achieves 3.8–4.2× prefill acceleration while maintaining near-baseline accuracy, and further improves performance on long-video benchmarks after lightweight fine-tuning.
📝 Abstract
Token reduction is an effective way to accelerate long-video vision-language models (VLMs), but most existing methods are designed for dense Transformers and do not directly account for hybrid architectures that interleave attention with linear-time state-space blocks (e.g., Mamba). We study query-conditioned token reduction for hybrid video VLMs and analyze reduction behavior through two properties: layer-wise sparsity (how many tokens capture query-relevant information) and importance stability (whether token-importance rankings persist across depth). Although token importance is sparse within each layer, the set of important tokens changes across layers, so aggressive early pruning is unreliable. Motivated by this, we propose a low-to-high progressive reduction schedule and a unified language-aware scoring mechanism for both attention and Mamba blocks (using an implicit-attention proxy for Mamba), enabling all-layer token reduction in hybrids. Under an aggressive compression setting (retaining 25% of visual tokens), our approach delivers substantial prefilling speedups (3.8–4.2×) with near-baseline accuracy at test time, and light fine-tuning under reduction further improves performance on long-context video benchmarks.
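To make the two core ideas concrete, here is a minimal NumPy sketch of a low-to-high progressive reduction schedule combined with a query-conditioned (language-aware) token scorer. Everything here is an illustrative assumption, not the paper's implementation: the linear schedule shape, the dot-product scorer, and the use of the same scorer as a stand-in for the Mamba implicit-attention proxy are all hypothetical choices for exposition.

```python
import numpy as np

def keep_ratio(layer, num_layers, final_ratio=0.25):
    # Linear low-to-high schedule (an assumed shape): keep nearly all
    # tokens in early layers, where the set of important tokens is still
    # shifting, and reach the target ratio only at the final layer.
    t = layer / max(num_layers - 1, 1)
    return 1.0 - t * (1.0 - final_ratio)

def language_aware_scores(query_vec, token_feats):
    # Hypothetical unified scorer: relevance of each visual token to the
    # text query. For attention blocks this mimics query-to-token
    # attention; for Mamba blocks the paper uses an implicit-attention
    # proxy, stubbed here with the same dot product for illustration.
    return token_feats @ query_vec

def progressive_reduce(query_vec, token_feats, num_layers, final_ratio=0.25):
    # Returns indices of visual tokens surviving all layers, pruning a
    # little more at each layer via top-k selection on the scores.
    n = token_feats.shape[0]
    idx = np.arange(n)
    for layer in range(num_layers):
        k = max(1, int(round(keep_ratio(layer, num_layers, final_ratio) * n)))
        if k >= len(idx):
            continue  # nothing to prune at this layer yet
        scores = language_aware_scores(query_vec, token_feats[idx])
        top = np.argsort(scores)[::-1][:k]
        idx = idx[np.sort(top)]  # preserve temporal order of kept tokens
    return idx
```

With 100 tokens, 4 layers, and `final_ratio=0.25`, the schedule keeps 100, 75, 50, and finally 25 tokens, matching the 25% retention setting quoted in the abstract; the point is that the 75% reduction is spread across depth rather than applied in one early cut.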