🤖 AI Summary
Existing video understanding benchmarks suffer from critical flaws: models often answer questions correctly using only static frames, textual cues, or commonsense knowledge, bypassing genuine temporal reasoning. To address this, we introduce TVBench, a novel open-source multiple-choice QA benchmark explicitly designed to evaluate fine-grained temporal understanding in video. Our analysis systematically exposes three pervasive weaknesses in mainstream benchmarks: static-frame dependency, textual cue leakage, and world-knowledge bias. TVBench enforces spatiotemporal integration through frame-level perturbations, textual information ablation, adversarial question design, and rigorous human validation. Empirical evaluation reveals that most state-of-the-art models perform near chance level (~25%), with only Qwen2-VL and Tarsier clearly surpassing this baseline, demonstrating TVBench's rigor, discriminative power, and heightened challenge for temporal reasoning.
📝 Abstract
Large language models have demonstrated impressive performance when integrated with vision models, even enabling video understanding. However, evaluating these video models presents its own unique challenges, for which several benchmarks have been proposed. In this paper, we show that the most widely used video-language benchmarks can be solved without requiring much temporal reasoning. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks; (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input; (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than visual reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues, while the automatic evaluation process with LLMs is unreliable, making it an unsuitable alternative. As a solution, we propose TVBench, a novel open-source video multiple-choice question-answering benchmark, and demonstrate through extensive evaluations that it requires a high level of temporal understanding. Surprisingly, we find that most recent state-of-the-art video-language models perform similarly to random chance on TVBench, with only a few models, such as Qwen2-VL and Tarsier, clearly surpassing this baseline.