🤖 AI Summary
Long video inputs inflate the visual token count of multimodal large language models (LMMs), and since self-attention cost grows quadratically with sequence length, inference efficiency degrades severely. To address this, we propose a dual cross-attention-driven video token compression framework. First, a pretrained vision encoder's outputs undergo pooling-based token reduction. Second, we introduce coupled visual-to-visual and text-to-visual cross-attention modules that jointly model temporal redundancy and cross-modal semantic alignment. Third, multi-stage token reweighting and fusion preserve fine-grained details while amplifying informative content. Our method achieves state-of-the-art or competitive performance across multiple video understanding benchmarks, reduces visual tokens by over 60%, and significantly lowers GPU memory consumption and FLOPs. Notably, it is, to our knowledge, the first end-to-end trainable, high-fidelity, and interpretable video token compression approach that maintains semantic integrity without sacrificing accuracy.
📝 Abstract
The advent of Large Multimodal Models (LMMs) has significantly extended Large Language Models (LLMs), enabling them to process and interpret diverse data modalities (e.g., image and video). However, as input complexity increases, particularly with long video sequences, the number of required tokens grows substantially, leading to quadratic computational costs. Efficiently compressing video tokens in LMMs while maintaining performance integrity has therefore become a pressing research challenge. In this paper, we introduce CrossLMM, which decouples long video sequences from LMMs via a dual cross-attention mechanism, substantially reducing the visual token count with minimal performance degradation. Specifically, we first apply a significant token reduction to the outputs of pretrained visual encoders through a pooling methodology. Then, within the LLM layers, we employ a visual-to-visual cross-attention mechanism, wherein the pooled visual tokens serve as queries against the original visual token set. This module enables more efficient token utilization while retaining fine-grained informational fidelity. In addition, we introduce a text-to-visual cross-attention mechanism, in which text tokens are enhanced through interaction with the original visual tokens, enriching their visual comprehension. Comprehensive empirical evaluation demonstrates that our approach achieves comparable or superior performance across diverse video-based LMM benchmarks, despite utilizing substantially fewer computational resources.
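The three-step pipeline the abstract describes (pooling reduction, visual-to-visual cross-attention with pooled tokens as queries, text-to-visual cross-attention) can be sketched with plain numpy. This is a minimal illustrative sketch, not the paper's implementation: it assumes average pooling, single-head attention without learned projections, residual additions, and arbitrary shapes and a pooling factor chosen only for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values, d):
    # Scaled dot-product attention: queries attend to the key/value set.
    scores = queries @ keys_values.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ keys_values

rng = np.random.default_rng(0)
d = 64                                   # token embedding dimension (illustrative)
num_visual, pool = 256, 4                # original visual tokens, pooling factor
visual = rng.normal(size=(num_visual, d))  # tokens from a pretrained vision encoder
text = rng.normal(size=(16, d))            # text tokens

# Step 1: pooling-based token reduction (average every `pool` consecutive tokens).
pooled = visual.reshape(num_visual // pool, pool, d).mean(axis=1)  # (64, d)

# Step 2: visual-to-visual cross-attention — the few pooled tokens act as
# queries over the full original visual token set, recovering fine detail.
refined_visual = pooled + cross_attention(pooled, visual, d)

# Step 3: text-to-visual cross-attention — text tokens query the original
# visual tokens to enrich their visual grounding.
enriched_text = text + cross_attention(text, visual, d)

# The LLM would now process 64 refined visual tokens instead of 256,
# a 75% reduction in visual tokens under this illustrative pooling factor.
print(refined_visual.shape, enriched_text.shape)
```

Note the asymmetry: compression happens only on the visual-token side (queries shrink to the pooled set), while both cross-attention modules keep the full original visual tokens as keys/values, which is how fine-grained information survives the reduction.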