🤖 AI Summary
Existing evaluation benchmarks for video-language models (VLMs) are vulnerable to spurious visual or textual shortcuts, yielding inflated scores and failing to reliably assess spatiotemporal and physical reasoning. To address this, we introduce Minimal Video Pairs (MVP), a shortcut-aware benchmark comprising 55K high-quality multiple-choice video QA examples. MVP is built around minimal-change video pairs: for each question, two visually similar videos have opposing correct answers, so a model must ground its answer in the video rather than exploit superficial cues. The benchmark draws on nine video sources spanning egocentric and exocentric footage, robot interaction data, and cognitive-science intuitive physics benchmarks, and scores each question jointly over both videos in a pair, so strategies that rely solely on visual or textual biases fall below random chance. Human accuracy is 92.9%, while the strongest open-source VLM reaches only 40.2% against a 25% random baseline, a gap that underscores MVP's resistance to shortcuts and its discriminative power for evaluating physical understanding.
📝 Abstract
Existing benchmarks for assessing the spatio-temporal understanding and reasoning abilities of video language models are susceptible to score inflation due to shortcut solutions based on superficial visual or textual cues. This paper mitigates these challenges by introducing the Minimal Video Pairs (MVP) benchmark, a simple shortcut-aware video QA benchmark for assessing the physical understanding of video language models. The benchmark comprises 55K high-quality multiple-choice video QA examples focusing on physical world understanding, curated from nine video data sources spanning egocentric and exocentric videos, robotic interaction data, and cognitive science intuitive physics benchmarks. To mitigate shortcut solutions that rely on superficial visual or textual cues and biases, each sample in MVP has a minimal-change pair: a visually similar video accompanied by an identical question but an opposing answer. To answer a question correctly, a model must provide correct answers for both examples in the minimal-change pair; as such, models that rely solely on visual or textual biases achieve below-random performance. Human performance on MVP is 92.9%, while the best open-source state-of-the-art video-language model achieves 40.2%, compared to random performance at 25%.
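To make the paired scoring rule concrete, here is a minimal sketch of how pair-level accuracy could be computed. This is not the authors' released evaluation code; the record fields (`pair_id`, `prediction`, `answer`) are hypothetical names used only for illustration.

```python
# Minimal sketch of MVP-style paired scoring (assumed data layout, not the
# authors' code): each record carries a hypothetical pair_id linking the two
# videos of a minimal-change pair, the model's predicted option, and the
# ground-truth option.
from collections import defaultdict

def paired_accuracy(records):
    """Fraction of minimal-change pairs where BOTH videos are answered correctly."""
    pairs = defaultdict(list)
    for r in records:
        pairs[r["pair_id"]].append(r["prediction"] == r["answer"])
    return sum(all(correct) for correct in pairs.values()) / len(pairs)

# Example: a model that gives the same answer to both videos in a pair
# (i.e., ignores the visual difference) necessarily gets that pair wrong,
# since the paired videos share a question but have opposing correct answers.
records = [
    {"pair_id": 0, "prediction": "A", "answer": "A"},  # first video: correct
    {"pair_id": 0, "prediction": "A", "answer": "B"},  # paired video: wrong
    {"pair_id": 1, "prediction": "C", "answer": "C"},  # both videos of
    {"pair_id": 1, "prediction": "D", "answer": "D"},  # pair 1 correct
]
print(paired_accuracy(records))  # 0.5 -- only pair 1 is fully correct
```

Under this rule, a model that latches onto a textual or visual bias shared by both videos answers at most one of the two correctly, which is what pushes bias-driven strategies below random performance.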