🤖 AI Summary
Most existing video understanding benchmarks focus on single-video analysis and fail to assess the cross-video reasoning (CVR) capabilities of multimodal large language models (MLLMs). To address this gap, we introduce CrossVid, the first benchmark dedicated to evaluating spatiotemporal reasoning across multiple videos. CrossVid spans four high-level dimensions and ten fine-grained tasks, comprising 5,331 videos and 9,015 question-answer pairs in single-choice, multiple-choice, and open-ended formats. It systematically covers core CVR challenges, including cross-video evidence integration, comparative analysis, and causal inference. A comprehensive evaluation of 12 state-of-the-art open- and closed-source MLLMs reveals severe limitations: the best-performing model, Gemini-2.5-Pro, reaches only 50.4% average accuracy, underscoring fundamental deficits in multi-video collaborative understanding. CrossVid thus establishes a rigorous, structured, and reproducible evaluation standard for advancing CVR research.
📝 Abstract
Cross-Video Reasoning (CVR) poses a significant challenge in video understanding: it requires models to jointly understand multiple videos and to aggregate and compare information across them. Most existing video understanding benchmarks focus on single-video analysis and therefore fail to assess the ability of multimodal large language models (MLLMs) to reason over multiple videos simultaneously. Recent benchmarks evaluate MLLMs on multi-view videos that capture different perspectives of the same scene, but their limited task coverage hinders a thorough assessment of MLLMs in diverse real-world CVR scenarios. To this end, we introduce CrossVid, the first benchmark designed to comprehensively evaluate MLLMs' spatial-temporal reasoning ability in cross-video contexts. Firstly, CrossVid encompasses a wide spectrum of hierarchical tasks, comprising four high-level dimensions and ten specific tasks, thereby closely reflecting the complex and varied nature of real-world video understanding. Secondly, CrossVid provides 5,331 videos, along with 9,015 challenging question-answer pairs, spanning single-choice, multiple-choice, and open-ended question formats. Through extensive experiments on various open-source and closed-source MLLMs, we observe that Gemini-2.5-Pro performs best on CrossVid, achieving an average accuracy of 50.4%. Notably, our in-depth case study demonstrates that most current MLLMs struggle with CVR tasks, primarily because they fail to integrate or compare evidence distributed across multiple videos when reasoning. These insights highlight the potential of CrossVid to guide future advances in MLLMs' CVR capabilities.
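To make the benchmark's structure concrete, here is a minimal, illustrative sketch of how a CrossVid-style question-answer item and a simple accuracy aggregation could be represented. The field names and the `accuracy` helper are assumptions made for illustration only; they do not describe CrossVid's actual release format or official evaluation protocol.

```python
# Illustrative only: this schema is an assumption, not CrossVid's actual data format.
from dataclasses import dataclass, field
from typing import Dict, List, Literal

QuestionType = Literal["single-choice", "multiple-choice", "open-ended"]

@dataclass
class CrossVidItem:
    """One hypothetical cross-video QA item: a question grounded in several videos."""
    item_id: str
    dimension: str               # one of the four high-level dimensions
    task: str                    # one of the ten fine-grained tasks
    video_paths: List[str]       # multiple videos per question (the core of CVR)
    question: str
    question_type: QuestionType
    options: List[str] = field(default_factory=list)  # empty for open-ended items
    answers: List[str] = field(default_factory=list)  # one or more gold answers

def accuracy(items: List[CrossVidItem], predictions: Dict[str, List[str]]) -> float:
    """Fraction of items whose predicted answer set exactly matches the gold answers."""
    correct = sum(
        1 for it in items
        if set(predictions.get(it.item_id, [])) == set(it.answers)
    )
    return correct / len(items) if items else 0.0
```

In practice, open-ended responses would need a more permissive scoring rule (e.g., rubric- or judge-based evaluation) rather than the exact-match placeholder used above.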