VideoZoomer: Reinforcement-Learned Temporal Focusing for Long Video Reasoning

📅 2025-12-26
🤖 AI Summary
To address keyframe omission and suboptimal static sampling caused by limited context windows in long-video understanding, this paper proposes a dynamic temporal-focusing framework. Starting from a low-frame-rate overview of the video, the model autonomously locates and retrieves high-frame-rate critical segments via a "temporal zoom" tool across multiple interactive reasoning steps. A PPO-based reinforcement learning mechanism enables dynamic adjustment of visual focus and correction of earlier selection errors during inference. Training follows a two-stage paradigm: supervised fine-tuning on distilled reflection trajectories, followed by RL-based policy optimization. The resulting 7B model achieves state-of-the-art performance among open-source models on multiple long-video reasoning benchmarks, matching or approaching closed-source systems, while significantly improving inference efficiency and complex reasoning capability under strict frame-budget constraints.

📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable progress in vision-language tasks yet remain limited in long video understanding due to their restricted context windows. Consequently, prevailing approaches tend to rely on uniform frame sampling or static pre-selection, which may overlook critical evidence and cannot correct initial selection errors during the reasoning process. To overcome these limitations, we propose VideoZoomer, a novel agentic framework that enables MLLMs to dynamically control their visual focus during reasoning. Starting from a coarse low-frame-rate overview, VideoZoomer invokes a temporal zoom tool to obtain high-frame-rate clips at autonomously chosen moments, thereby progressively gathering fine-grained evidence in a multi-turn interactive manner. Accordingly, we adopt a two-stage training strategy: a cold-start supervised fine-tuning phase on a curated dataset of distilled exemplar and reflection trajectories, followed by reinforcement learning to further refine the agentic policy. Extensive experiments demonstrate that our 7B model delivers diverse and complex reasoning patterns, yielding strong performance across a broad set of long video understanding and reasoning benchmarks. These emergent capabilities allow it to consistently surpass existing open-source models and even rival proprietary systems on challenging tasks, while achieving superior efficiency under reduced frame budgets.
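The multi-turn zoom loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: every name here (`sample_frames`, `propose_action`, `videozoomer_loop`, the frame rates, and the turn limit) is a hypothetical stand-in, and the policy stub simply zooms a fixed number of times before answering, whereas the real system would query the MLLM at each turn.

```python
# Hypothetical sketch of a VideoZoomer-style temporal-zoom loop.
# All identifiers and parameter values are illustrative assumptions,
# not the paper's actual API.

def sample_frames(start, end, fps):
    """Uniformly sample timestamps in [start, end) at the given frame rate."""
    n = max(1, int((end - start) * fps))
    step = (end - start) / n
    return [round(start + i * step, 3) for i in range(n)]

def propose_action(context, turn, max_turns):
    """Stand-in for the MLLM policy: zoom into a segment, then answer.

    A real policy would inspect the accumulated frames in `context`
    and choose the segment (or the final answer) itself.
    """
    if turn < max_turns - 1:
        # Pretend the model localized an informative 10-second window.
        return ("zoom", (30.0 + 10 * turn, 40.0 + 10 * turn))
    return ("answer", "final answer based on gathered evidence")

def videozoomer_loop(duration, overview_fps=0.2, zoom_fps=2.0, max_turns=3):
    # Turn 0 input: coarse low-frame-rate overview of the whole video.
    context = [("overview", sample_frames(0.0, duration, overview_fps))]
    for turn in range(max_turns):
        action, payload = propose_action(context, turn, max_turns)
        if action == "zoom":
            start, end = payload
            # Retrieve a high-frame-rate clip for the chosen segment.
            context.append(("clip", sample_frames(start, end, zoom_fps)))
        else:
            return payload, context
    return None, context

answer, ctx = videozoomer_loop(duration=120.0)
print(answer)    # the stub policy's final answer
print(len(ctx))  # 3: one overview plus two zoomed clips
```

Under the assumed budgets, the context grows from a sparse 0.2 fps overview to a handful of dense 2 fps clips, which is the frame-budget trade-off the paper exploits.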
Problem

Research questions and friction points this paper is trying to address.

Enables dynamic visual focus in long video reasoning
Overcomes limitations of uniform frame sampling methods
Enhances multimodal models' fine-grained evidence gathering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic temporal zoom tool for visual focus control
Two-stage training with supervised fine-tuning and reinforcement learning
Multi-turn interactive evidence gathering for long video reasoning