🤖 AI Summary
Existing long-video understanding models suffer from prohibitive memory and computational overhead, struggling to balance performance and efficiency. To address this, we propose a task-aware KV sparsification framework integrating chunked prefilling with a dual-level KV decoding mechanism: intra-chunk full attention preserves fine-grained temporal modeling, while inter-chunk sparse attention dynamically selects task-relevant key-value pairs based on semantic relevance, enabling efficient KV cache compression. Our method substantially reduces computational cost for long-sequence processing. It achieves state-of-the-art performance among open-source lightweight multimodal large language models (MLLMs) across multiple long-video understanding benchmarks. On a single A100 GPU, it enables real-time inference on videos exceeding 10,000 frames—processing thousands of frames in just seconds—marking the first approach to jointly achieve high accuracy and high efficiency at the ten-thousand-frame scale.
📝 Abstract
Multi-modal large language models (MLLMs) have made significant progress in video understanding over the past few years. However, processing long video inputs remains a major challenge due to high memory and computational costs, making it difficult for current models to achieve both strong performance and high efficiency in long video understanding. To address this challenge, we propose Video-XL-2, a novel MLLM built on task-aware KV sparsification that delivers superior cost-effectiveness for long-video understanding. The proposed framework operates in two key steps: chunk-based pre-filling and bi-level key-value decoding. Chunk-based pre-filling divides the visual token sequence into chunks, applying full attention within each chunk and sparse attention across chunks, which significantly reduces computational and memory overhead. During decoding, bi-level key-value decoding selectively reloads either dense or sparse key-values for each chunk based on its relevance to the task. This further improves memory efficiency and enhances the model's ability to capture fine-grained information. Video-XL-2 achieves state-of-the-art performance on various long video understanding benchmarks, outperforming existing open-source lightweight models. It also demonstrates exceptional efficiency, processing over 10,000 frames on a single NVIDIA A100 (80GB) GPU and thousands of frames in just a few seconds.
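The two-step pipeline in the abstract can be illustrated with a minimal toy sketch. This is not the paper's implementation: the function names (`chunked_prefill`, `bilevel_decode_kv`), the use of contextualised chunk features as a stand-in for real KV caches, the mean-pooled relevance score, and the strided subsampling as "sparse KVs" are all simplifying assumptions made here for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def chunked_prefill(tokens, chunk_size):
    """Chunk-based pre-filling (toy): split visual tokens into chunks and
    apply full attention only within each chunk, so cost grows linearly in
    the number of chunks rather than quadratically in sequence length."""
    chunks = [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]
    kv_caches = []
    for c in chunks:
        # intra-chunk full attention (single head, Q = K = V = c for simplicity)
        attn = softmax(c @ c.T / np.sqrt(c.shape[1]))
        kv_caches.append(attn @ c)  # contextualised features stand in for the chunk's KV cache
    return kv_caches

def bilevel_decode_kv(kv_caches, query, top_k=2, sparse_stride=4):
    """Bi-level KV decoding (toy): chunks most relevant to the task query
    keep their dense KVs; all other chunks contribute only a strided
    (sparse) subset, shrinking the KV cache reloaded at decode time."""
    # hypothetical relevance score: query vs. mean-pooled chunk features
    scores = [float(query @ kv.mean(axis=0)) for kv in kv_caches]
    dense_ids = set(np.argsort(scores)[-top_k:])
    selected = [kv if i in dense_ids else kv[::sparse_stride]
                for i, kv in enumerate(kv_caches)]
    return np.concatenate(selected, axis=0)
```

With 32 tokens split into four chunks of 8, keeping dense KVs for the top-2 chunks and a stride-4 subset elsewhere reloads 16 + 4 = 20 key-value rows instead of 32, mirroring how task-aware sparsification trades cache size for relevance.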