Video-o3: Native Interleaved Clue Seeking for Long Video Multi-Hop Reasoning

📅 2026-01-30
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing multimodal large language models for long-form video struggle to locate sparse, critical evidence amid redundant content because they rely on uniform frame sampling and single-pass reasoning. This work proposes Video-o3, a framework that iteratively discovers salient visual cues, performs fine-grained verification of key segments, and adaptively terminates reasoning once sufficient evidence is gathered. The approach introduces three key innovations: a native interleaved tool-calling mechanism, task-decoupled attention masks, and verifiable trajectory-guided rewards. Trained on large-scale synthetic data via a joint supervised and reinforcement learning strategy, Video-o3 improves both the efficiency and the accuracy of multi-hop reasoning, achieving state-of-the-art results of 72.1% accuracy on MLVU and 46.5% on Video-Holmes.
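
To make the described loop concrete, here is a minimal sketch of the iterate-verify-terminate pattern. The callables (`seek_clues`, `inspect_segment`, `is_sufficient`, `answer`) are hypothetical placeholders, not the paper's API; in Video-o3 the equivalent steps are realized as native tool calls interleaved inside a single decoding process rather than as Python functions.

```python
from typing import Callable, List, Tuple

Span = Tuple[float, float]  # (start_sec, end_sec) of a candidate video segment


def multi_hop_video_qa(
    question: str,
    seek_clues: Callable[[str, list], List[Span]],
    inspect_segment: Callable[[Span], list],
    is_sufficient: Callable[[str, list], bool],
    answer: Callable[[str, list], str],
    max_steps: int = 8,
) -> str:
    """Iteratively gather sparse evidence from a long video, then answer."""
    evidence: list = []
    for _ in range(max_steps):
        # 1. Discover candidate time spans likely to contain salient clues,
        #    conditioned on what has already been collected.
        for span in seek_clues(question, evidence):
            # 2. Verify each candidate with dense, fine-grained frames.
            evidence.append((span, inspect_segment(span)))
        # 3. Adaptively terminate once the gathered evidence suffices.
        if is_sufficient(question, evidence):
            break
    return answer(question, evidence)
```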

📝 Abstract
Existing multimodal large language models for long-video understanding predominantly rely on uniform sampling and single-turn inference, limiting their ability to identify sparse yet critical evidence amid extensive redundancy. We introduce Video-o3, a novel framework that supports iterative discovery of salient visual clues, fine-grained inspection of key segments, and adaptive termination once sufficient evidence is acquired. Technically, we address two core challenges in interleaved tool invocation. First, to mitigate attention dispersion induced by the heterogeneity of reasoning and tool-calling, we propose Task-Decoupled Attention Masking, which isolates per-step concentration while preserving shared global context. Second, to control context length growth in multi-turn interactions, we introduce a Verifiable Trajectory-Guided Reward that balances exploration coverage with reasoning efficiency. To support training at scale, we further develop a data synthesis pipeline and construct Seeker-173K, comprising 173K high-quality tool-interaction trajectories for effective supervised and reinforcement learning. Extensive experiments show that Video-o3 substantially outperforms state-of-the-art methods, achieving 72.1% accuracy on MLVU and 46.5% on Video-Holmes. These results demonstrate Video-o3's strong multi-hop evidence-seeking and reasoning capabilities, and validate the effectiveness of native tool invocation in long-video scenarios.
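
The abstract names the masking rule only at a high level, so the following is one plausible reading, not the paper's released implementation: each tool-interaction step is labeled with a segment id, the shared context (question plus coarsely sampled frames) occupies the first tokens, and each step may attend to the shared context and to itself but not to other steps.

```python
import torch


def task_decoupled_mask(seg_ids: torch.Tensor, global_len: int) -> torch.Tensor:
    """Hypothetical sketch of a Task-Decoupled Attention Mask.

    seg_ids[i] labels token i with its tool-interaction step (0, 1, 2, ...);
    the first `global_len` tokens form the shared global context. True at
    [i, j] means query token i may attend to key token j.
    """
    n = seg_ids.numel()
    causal = torch.tril(torch.ones(n, n, dtype=torch.bool))   # causal LM mask
    same_step = seg_ids.unsqueeze(0) == seg_ids.unsqueeze(1)  # within one step
    to_global = torch.arange(n).unsqueeze(0) < global_len     # keys in context
    return causal & (same_step | to_global)


# Shared context of 4 tokens (step 0) followed by two tool steps: tokens of
# step 2 can see the context and step 2 itself, but not step 1, even though
# step 1 precedes them causally.
seg = torch.tensor([0, 0, 0, 0, 1, 1, 1, 2, 2])
mask = task_decoupled_mask(seg, global_len=4)
```

Similarly, a hedged sketch of how a Verifiable Trajectory-Guided Reward might combine a verifiable correctness term with an evidence-coverage bonus and a turn-count penalty; all coefficients and the exact decomposition are illustrative assumptions.

```python
def trajectory_reward(
    correct: bool,       # answer verified against ground truth
    coverage: float,     # fraction of annotated clue spans the trajectory visited
    n_turns: int,        # number of tool-interaction turns used
    max_turns: int = 8,
    lam: float = 0.2,
) -> float:
    """Illustrative reward shape balancing exploration coverage with efficiency."""
    r_answer = 1.0 if correct else 0.0
    r_cover = 0.5 * coverage              # reward touching the gold evidence
    r_len = lam * (n_turns / max_turns)   # discourage unbounded context growth
    return r_answer + r_cover - r_len
```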
Problem

Research questions and friction points this paper is trying to address.

long-video understanding
multi-hop reasoning
sparse evidence
multimodal large language models
evidence seeking
Innovation

Methods, ideas, or system contributions that make the work stand out.

interleaved tool invocation
Task-Decoupled Attention Masking
Verifiable Trajectory-Guided Reward
multi-hop video reasoning
long-video understanding