Temporal Chain of Thought: Long-Video Understanding by Thinking in Frames

📅 2025-07-01
🤖 AI Summary
Vision-language models (VLMs) for long-video understanding are susceptible to irrelevant frames and make inefficient use of extended visual context. Method: We propose a Temporal Chain-of-Thought inference strategy in which the model itself iteratively selects salient frames and dynamically constructs a compact, highly relevant visual context. This approach combines adaptive frame extraction with additional inference-time computation, without any architectural modification, to overcome fixed context-length constraints. Contribution/Results: The method achieves state-of-the-art performance on four long-video question-answering benchmarks. Notably, on videos exceeding one hour in duration, it attains a 2.8-point accuracy gain using a 32K-context window over a 700K-context baseline, demonstrating substantial improvements in both the efficiency and robustness of long-horizon temporal reasoning.

📝 Abstract
Despite recent advances in Vision-Language Models (VLMs), long-video understanding remains a challenging problem. Although state-of-the-art long-context VLMs can process around 1000 input frames, they still struggle to effectively leverage this sequence length, and succumb to irrelevant distractors within the context window. We present Temporal Chain of Thought, an inference strategy for video question-answering that curates the model's input context. We use the VLM itself to iteratively identify and extract the most relevant frames from the video, which are then used for answering. We demonstrate how leveraging more computation at inference-time to select the most relevant context leads to improvements in accuracy, in agreement with recent work on inference-time scaling of LLMs. Moreover, we achieve state-of-the-art results on 4 diverse video question-answering datasets, showing consistent improvements with 3 different VLMs. In particular, our method shines on longer videos which would not otherwise fit within the model's context window: On longer videos of more than 1 hour on LVBench, our approach using a context window of 32K outperforms the same VLM using standard inference with a 700K context window by 2.8 points.
Problem

Research questions and friction points this paper is trying to address.

Improving long-video understanding with VLMs
Selecting relevant frames for video QA
Enhancing accuracy via inference-time context curation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iteratively extracts relevant video frames
Uses VLM for context selection
Improves accuracy with inference-time scaling
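
The iterative selection loop described above can be sketched as follows. This is a minimal illustration of the control flow only: `vlm_select` and `vlm_answer` are hypothetical stand-ins for calls to a real VLM (here implemented as simple stubs so the loop runs end-to-end), and the budget/round parameters are illustrative, not the paper's actual settings.

```python
def vlm_select(frames, question, budget):
    """Stub: a real VLM would score frames for relevance to the question.
    Here we simply drop every other frame until we fit the budget."""
    selected = list(frames)
    while len(selected) > budget:
        selected = selected[::2]
    return selected


def vlm_answer(frames, question):
    """Stub: a real VLM would answer the question from the curated frames."""
    return f"answer from {len(frames)} frames"


def temporal_chain_of_thought(frames, question, budget=32, rounds=3):
    """Iteratively narrow the visual context, then answer from it.

    The same model curates its own input: each round shrinks the
    candidate frame set toward a compact, relevant context.
    """
    context = frames
    for _ in range(rounds):
        context = vlm_select(context, question, budget)
        if len(context) <= budget:
            break
    return vlm_answer(context, question)
```

The key design point this mirrors is that extra inference-time computation is spent on context curation rather than on a longer context window, which is why a curated 32K context can outperform standard inference at 700K.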