🤖 AI Summary
Vision-language models (VLMs) for long-video understanding are easily distracted by irrelevant frames and make poor use of their extended visual context. Method: We propose Temporal Chain-of-Thought, an inference strategy in which the model itself iterates to select salient frames, dynamically building a compact, highly relevant visual context. The approach combines adaptive frame selection with additional inference-time computation, requiring no architectural changes, to work around fixed context-length limits. Contribution/Results: The method achieves state-of-the-art results on four long-video question-answering benchmarks. Notably, on videos longer than one hour, it outperforms a 700K-context baseline by 2.8 accuracy points while using only a 32K context window, demonstrating gains in both efficiency and robustness for long-horizon temporal reasoning.
📝 Abstract
Despite recent advances in Vision-Language Models (VLMs), long-video understanding remains a challenging problem. Although state-of-the-art long-context VLMs can process around 1000 input frames, they still struggle to effectively leverage this sequence length, and succumb to irrelevant distractors within the context window. We present Temporal Chain of Thought, an inference strategy for video question-answering that curates the model's input context. We use the VLM itself to iteratively identify and extract the most relevant frames from the video, which are then used for answering. We demonstrate how leveraging more computation at inference time to select the most relevant context leads to improvements in accuracy, in agreement with recent work on inference-time scaling of LLMs. Moreover, we achieve state-of-the-art results on 4 diverse video question-answering datasets, showing consistent improvements with 3 different VLMs. In particular, our method shines on videos that would not otherwise fit within the model's context window: on LVBench videos longer than 1 hour, our approach using a 32K context window outperforms the same VLM using standard inference with a 700K context window by 2.8 points.
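The abstract describes an iterative loop in which the VLM itself narrows the video down to its most relevant frames before answering. A minimal sketch of that selection loop is below; the function and parameter names (`select_relevant_frames`, `budget`, `keep_ratio`) and the stand-in scorer are illustrative assumptions, not the paper's actual implementation, and a real system would replace `score_frames` with a VLM relevance call.

```python
# Hypothetical sketch of an iterative frame-selection loop in the spirit of
# Temporal Chain of Thought: a scorer (standing in for the VLM) rates candidate
# frames for relevance to the question, and the context is narrowed each round.

def select_relevant_frames(frames, question, score_frames,
                           budget=8, rounds=3, keep_ratio=0.5):
    """Iteratively narrow `frames` to at most `budget` salient frames."""
    candidates = list(frames)
    for _ in range(rounds):
        if len(candidates) <= budget:
            break
        scores = score_frames(candidates, question)  # model-assigned relevance
        # Keep the top fraction of candidates (never below the final budget).
        keep = max(budget, int(len(candidates) * keep_ratio))
        ranked = sorted(zip(candidates, scores), key=lambda p: p[1], reverse=True)
        candidates = [f for f, _ in ranked[:keep]]
    return candidates[:budget]

# Toy demo: frames are integers; relevance peaks near frame index 50.
def toy_scorer(frames, question):
    return [-abs(f - 50) for f in frames]

selected = select_relevant_frames(range(100), "what happens mid-video?", toy_scorer)
print(selected)  # the frames nearest index 50
```

The loop mirrors the paper's high-level idea of spending extra inference-time computation (multiple scoring rounds) to build a compact context, rather than feeding all frames to the model at once.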