Dynamic Token Compression for Efficient Video Understanding through Reinforcement Learning

📅 2026-03-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of multimodal large language models in video understanding, where high computational costs and visual token redundancy lead to context degradation, and existing compression methods struggle to adapt to downstream tasks. To overcome these challenges, the authors propose SCORE, a framework that employs a lightweight policy network trained via reinforcement learning. SCORE leverages a “surprise-augmented” state representation incorporating inter-frame residuals to capture temporal dynamics and motion saliency. It further integrates grouped advantage estimation and a two-stage curriculum learning strategy—progressing from static pseudo-videos to real dynamic videos—to enable task-adaptive dynamic visual token compression. Experiments demonstrate that SCORE retains 99.5% of original performance using only 10% of visual tokens, significantly outperforming existing methods across multiple video understanding benchmarks while achieving a 16× speedup in prefilling latency.
📝 Abstract
Multimodal Large Language Models have demonstrated remarkable capabilities in video understanding, yet face prohibitive computational costs and performance degradation from "context rot" due to massive visual token redundancy. Existing compression strategies typically rely on heuristics or fixed transformations that are often decoupled from the downstream task objectives, limiting their adaptability and effectiveness. To address this, we propose SCORE (Surprise-augmented token COmpression via REinforcement learning), a unified framework that learns an adaptive token compression policy. SCORE introduces a lightweight policy network conditioned on a surprise-augmented state representation that incorporates inter-frame residuals to explicitly capture temporal dynamics and motion saliency. We optimize this policy using a group-wise reinforcement learning scheme with a split-advantage estimator, stabilized by a two-stage curriculum transferring from static pseudo-videos to real dynamic videos. Extensive experiments on diverse video understanding benchmarks demonstrate that SCORE significantly outperforms state-of-the-art baselines. Notably, SCORE achieves a 16× prefill speedup while preserving 99.5% of original performance at a 10% retention ratio, offering a scalable solution for efficient long-form video understanding.
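The core idea in the abstract — a surprise-augmented state built from inter-frame residuals, scored by a lightweight policy that keeps a fixed fraction of visual tokens — can be sketched as follows. This is a minimal illustration only: the function names, the linear scoring head, and the random weights are assumptions for demonstration; the actual SCORE policy is a trained network optimized with group-wise reinforcement learning, which is not reproduced here.

```python
import numpy as np

def surprise_augmented_state(frames):
    """Augment per-frame token features with inter-frame residuals.

    frames: array of shape (T, N, D) — T frames, N visual tokens, D dims.
    Returns shape (T, N, 2D): original features concatenated with the
    residual against the previous frame (zero for the first frame),
    a simple proxy for temporal "surprise" / motion saliency.
    """
    residuals = np.zeros_like(frames)
    residuals[1:] = frames[1:] - frames[:-1]
    return np.concatenate([frames, residuals], axis=-1)

def compress_tokens(frames, retention=0.10, rng=None):
    """Keep the top `retention` fraction of tokens by policy score.

    The linear head `w` stands in for the paper's lightweight policy
    network; here it is random, purely to show the data flow.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    state = surprise_augmented_state(frames)          # (T, N, 2D)
    T, N, D2 = state.shape
    w = rng.normal(size=(D2,))                        # hypothetical policy head
    scores = state.reshape(T * N, D2) @ w             # one score per token
    k = max(1, int(retention * T * N))                # e.g. 10% retention
    keep = np.sort(np.argsort(scores)[-k:])           # indices of kept tokens
    return frames.reshape(T * N, -1)[keep]
```

At a 10% retention ratio, an 8-frame clip with 16 tokens per frame (128 tokens) is reduced to 12 tokens before being passed to the LLM, which is where the reported prefill speedup comes from.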
Problem

Research questions and friction points this paper is trying to address.

video understanding
token compression
computational efficiency
context rot
multimodal large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Token Compression
Reinforcement Learning
Video Understanding
Multimodal LLMs
Temporal Dynamics
🔎 Similar Papers
No similar papers found.
Shida Wang
National University of Singapore
Sequence Modelling · Large Language Model
YongXiang Hua
University of Science and Technology of China, State Key Laboratory of Cognitive Intelligence
Zhou Tao
University of Science and Technology of China, State Key Laboratory of Cognitive Intelligence
Haoyu Cao
University of Science and Technology of China, State Key Laboratory of Cognitive Intelligence
Linli Xu
University of Science and Technology of China