🤖 AI Summary
Existing video understanding benchmarks inadequately characterize the true gap between large models and humans in correctness and robustness. To address this, we propose Video-TT, the first comprehensive benchmark to pair open-ended question answering with multi-dimensional adversarial questions. Built on 1,000 YouTube Shorts videos, it systematically evaluates models' understanding of temporal structure, causal reasoning, and implicit semantics. Its key contributions are: (1) a novel adversarial question set covering natural perturbations, semantic ambiguities, and logical traps; and (2) human performance as the gold standard, enabling quantitative measurement of the model-human gap. Extensive experiments reveal that current state-of-the-art video foundation models fall significantly short of humans in both accuracy and stability, especially on long-horizon and counterfactual reasoning tasks. Video-TT thus provides a reproducible, scalable, and diagnostic benchmark for rigorously evaluating and systematically improving video understanding capabilities.
📝 Abstract
Human intelligence requires both correctness and robustness, with the former being foundational for the latter. In video understanding, correctness ensures the accurate interpretation of visual content, while robustness maintains consistent performance under challenging conditions. Despite advances in video large language models (video LLMs), existing benchmarks inadequately reflect the gap between these models and human intelligence in maintaining correctness and robustness when interpreting video. We introduce the Video Thinking Test (Video-TT) to assess whether video LLMs can interpret real-world videos as effectively as humans. Video-TT reflects genuine gaps in understanding complex visual narratives and evaluates robustness against natural adversarial questions. It comprises 1,000 YouTube Shorts videos, each paired with one open-ended question and four adversarial questions that probe visual and narrative complexity. Our evaluation shows a significant gap between video LLMs and human performance.
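To make the benchmark structure concrete, below is a minimal Python sketch of how a Video-TT record and the two headline metrics could be represented: correctness on the primary open-ended question, and robustness as consistency across that question plus its four adversarial variants. The record fields (`VideoTTItem`), the `model` and `judge` callables, and the all-or-nothing robustness rule are illustrative assumptions, not the official release format or scoring protocol.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VideoTTItem:
    """One hypothetical Video-TT record: a video, its primary
    open-ended question, and four adversarial follow-ups."""
    video_id: str                     # YouTube Shorts identifier
    primary_question: str             # open-ended question
    adversarial_questions: List[str]  # four natural adversarial variants
    answer: str                       # human-verified reference answer

def evaluate(items: List[VideoTTItem],
             model: Callable[[str, str], str],
             judge: Callable[[str, str], bool]) -> dict:
    """Score correctness on the primary question and robustness as
    consistency across all five questions for the same video.

    `model(video_id, question)` returns the model's answer;
    `judge(prediction, reference)` decides answer equivalence
    (e.g., an LLM-as-judge call). Both are placeholders here.
    """
    correct = robust = 0
    for item in items:
        primary_ok = judge(model(item.video_id, item.primary_question),
                           item.answer)
        adversarial_ok = all(
            judge(model(item.video_id, q), item.answer)
            for q in item.adversarial_questions
        )
        correct += primary_ok
        # Robust only if the primary question AND every adversarial
        # probe of the same video are answered correctly.
        robust += primary_ok and adversarial_ok
    n = len(items)
    return {"correctness": correct / n, "robustness": robust / n}
```

Under this assumed scheme, robustness is strictly harder than correctness: a model that answers the open-ended question but flips on any adversarial rephrasing scores on the first metric only, which mirrors the correctness-versus-robustness distinction the abstract draws.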