Thinking with Drafts: Speculative Temporal Reasoning for Efficient Long Video Understanding

📅 2025-11-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Long-video understanding is hindered by two key bottlenecks: redundant multimodal context and inefficient inter-frame reasoning. To address these, the paper proposes a paradigm that decouples temporal perception from reasoning, implemented as a reinforcement learning–based dual-model collaborative framework: a lightweight draft multimodal large language model (MLLM) rapidly proposes salient frames from densely sampled temporal regions, while a powerful target MLLM performs fine-grained verification of the proposals, iteratively refining its attention until convergence—mirroring the collaborative processing pathways of the human brain. To support training, the authors construct SpecTemp-80K, a dataset of 80K high-quality long-video samples with synchronized dual-level annotations for coarse evidence spans and fine-grained frame-level evidence. Experiments demonstrate state-of-the-art accuracy across multiple benchmarks alongside significant inference speedup, validating the effectiveness and scalability of the dynamic "proposal–verification" collaboration mechanism for efficient long-video understanding.

📝 Abstract
Long video understanding is essential for human-like intelligence, enabling coherent perception and reasoning over extended temporal contexts. While the emerging thinking-with-frames paradigm, which alternates between global temporal reasoning and local frame examination, has advanced the reasoning capabilities of video multi-modal large language models (MLLMs), it suffers from a significant efficiency bottleneck due to the progressively growing and redundant multi-modal context. To address this, we propose SpecTemp, a reinforcement learning-based Speculative Temporal reasoning framework that decouples temporal perception from reasoning via a cooperative dual-model design. In SpecTemp, a lightweight draft MLLM rapidly explores and proposes salient frames from densely sampled temporal regions, while a powerful target MLLM focuses on temporal reasoning and verifies the draft's proposals, iteratively refining its attention until convergence. This design mirrors the collaborative pathways of the human brain, balancing efficiency with accuracy. To support training, we construct the SpecTemp-80K dataset, featuring synchronized dual-level annotations for coarse evidence spans and fine-grained frame-level evidence. Experiments across multiple video understanding benchmarks demonstrate that SpecTemp not only maintains competitive accuracy but also significantly accelerates inference compared with existing thinking-with-frames methods.
Problem

Research questions and friction points this paper is trying to address.

Thinking-with-frames reasoning accumulates a progressively growing, redundant multimodal context
Alternating global temporal reasoning with local frame examination creates a severe efficiency bottleneck
Inference on long videos must be accelerated without sacrificing accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning–trained dual-model framework decouples temporal perception from reasoning
Lightweight draft MLLM proposes salient frames; powerful target MLLM verifies the proposals
Iterative proposal–verification refinement balances efficiency with accuracy
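The proposal–verification loop behind these bullets can be illustrated with a toy sketch. Everything below is a hypothetical stand-in: `draft_propose`, `target_verify`, and the per-frame relevance scores are placeholders for the draft MLLM, the target MLLM, and real video frames, not the paper's actual models or training setup.

```python
def draft_propose(frames, focus, k=4):
    """Draft-model stand-in: cheaply rank frames in the current
    focus region and propose the top-k as salient candidates."""
    return sorted(focus, key=lambda i: frames[i], reverse=True)[:k]

def target_verify(frames, proposals, threshold=0.5):
    """Target-model stand-in: verify proposals, accepting only
    frames whose (toy) relevance score clears a threshold."""
    return [i for i in proposals if frames[i] >= threshold]

def speculative_temporal_reasoning(frames, max_rounds=5):
    """Iterate proposal -> verification, narrowing the focus region
    until the accepted frame set stops changing (convergence)."""
    focus = list(range(len(frames)))        # start from dense sampling
    accepted = []
    for _ in range(max_rounds):
        proposals = draft_propose(frames, focus)
        verified = target_verify(frames, proposals)
        if verified == accepted:            # converged: no change
            break
        accepted = verified
        # refine attention: next round searches around accepted frames
        focus = sorted({j for i in accepted
                        for j in (i - 1, i, i + 1)
                        if 0 <= j < len(frames)})
    return accepted

# toy per-frame "relevance" for a densely sampled long video
scores = [0.1, 0.2, 0.9, 0.8, 0.3, 0.05, 0.7, 0.1]
print(speculative_temporal_reasoning(scores))  # → [2, 3, 6]
```

The sketch only captures the control flow: the cheap model narrows the search space, the expensive model spends its compute on a small verified subset, and the loop terminates when verification stabilizes.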