🤖 AI Summary
This work addresses a critical limitation in current video multimodal large language models (MLLMs): their poor fine-grained spatiotemporal understanding, particularly their inability to distinguish videos that differ only in temporal structure. To this end, the study is the first to apply a hierarchical theory of temporal cognition from cognitive science to construct a diagnostic benchmark based on the minimal-pair paradigm. The benchmark employs video pairs with identical static content but divergent temporal structures, organized within a three-level cognitive framework—atomic events, event attributes, and event dependencies—to disentangle linguistic priors from visual shortcuts. Comprising 600 instances (2,400 video–question pairs), it features complementary question design, fine-grained annotations, and an instance-level accuracy metric. Evaluation of over 20 state-of-the-art MLLMs reveals that even the best model achieves only 48.2% accuracy, far below human performance at 98.2%, underscoring these models' heavy reliance on static cues.
📝 Abstract
Fine-grained spatio-temporal understanding is essential for video reasoning and embodied AI. Yet, while Multimodal Large Language Models (MLLMs) master static semantics, their grasp of temporal dynamics remains brittle. We present TimeBlind, a diagnostic benchmark for compositional spatio-temporal understanding. Inspired by cognitive science, TimeBlind categorizes fine-grained temporal understanding into three levels: recognizing atomic events, characterizing event properties, and reasoning about event interdependencies. Unlike benchmarks that conflate recognition with temporal reasoning, TimeBlind leverages a minimal-pairs paradigm: video pairs share identical static visual content but differ solely in temporal structure, with complementary questions that neutralize language priors. Evaluating over 20 state-of-the-art MLLMs (e.g., GPT-5, Gemini 3 Pro) on 600 curated instances (2,400 video–question pairs) reveals that the Instance Accuracy (correctly distinguishing both videos in a pair) of the best-performing MLLM is only 48.2%, far below human performance (98.2%). These results demonstrate that even frontier models rely heavily on static visual shortcuts rather than genuine temporal logic, positioning TimeBlind as a vital diagnostic tool for next-generation video understanding. Dataset and code are available at https://baiqi-li.github.io/timeblind_project/.
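The Instance Accuracy metric described above scores a minimal pair as correct only when the model answers correctly for *both* videos in the pair, so a model that guesses from static content alone cannot score well. A minimal sketch of this scoring rule follows; the function and data-structure names are illustrative assumptions, not taken from the released code:

```python
def instance_accuracy(predictions, answers):
    """Fraction of instances where BOTH videos in the minimal pair are
    answered correctly.

    predictions, answers: dicts mapping instance_id -> (answer_for_video_a,
    answer_for_video_b). Names are hypothetical, for illustration only.
    """
    correct = sum(
        1
        for instance_id, gold in answers.items()
        # an instance counts only if both per-video answers match the gold pair
        if predictions.get(instance_id) == gold
    )
    return correct / len(answers)


# Toy usage: the model gets instance "i1" fully right, but answers
# identically for both videos of "i2" (a static-shortcut failure).
gold = {"i1": ("A", "B"), "i2": ("A", "B")}
pred = {"i1": ("A", "B"), "i2": ("A", "A")}
print(instance_accuracy(pred, gold))  # → 0.5
```

Under this metric, a model that ignores temporal structure and gives the same answer for both videos of every pair scores near zero, which is why Instance Accuracy is a stricter diagnostic than per-question accuracy.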