CrossVid: A Comprehensive Benchmark for Evaluating Cross-Video Reasoning in Multimodal Large Language Models

📅 2025-11-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing video understanding benchmarks largely focus on single-video analysis and fail to assess multimodal large language models' (MLLMs) cross-video reasoning (CVR) capabilities. To address this gap, we introduce CrossVid, the first benchmark dedicated to evaluating spatiotemporal reasoning across multiple videos. CrossVid encompasses four high-level semantic dimensions and ten fine-grained tasks, comprising 5,331 videos and 9,015 question-answer pairs in single-choice, multiple-choice, and open-ended formats. It systematically formalizes core CVR challenges: cross-video evidence integration, comparative analysis, and causal inference. A comprehensive evaluation of state-of-the-art open- and closed-source MLLMs reveals severe limitations: the best-performing model, Gemini-2.5-Pro, reaches only 50.4% average accuracy, underscoring fundamental deficits in multi-video collaborative understanding. CrossVid thus establishes a rigorous, structured, and reproducible evaluation standard for advancing CVR research.

📝 Abstract
Cross-Video Reasoning (CVR) presents a significant challenge in video understanding, as it requires jointly understanding multiple videos in order to aggregate and compare information across them. Most existing video understanding benchmarks focus on single-video analysis, failing to assess the ability of multimodal large language models (MLLMs) to reason over several videos simultaneously. Recent benchmarks evaluate MLLMs' capabilities on multi-view videos that capture different perspectives of the same scene; however, their limited task coverage hinders a thorough assessment of MLLMs in diverse real-world CVR scenarios. To this end, we introduce CrossVid, the first benchmark designed to comprehensively evaluate MLLMs' spatial-temporal reasoning ability in cross-video contexts. First, CrossVid encompasses a wide spectrum of hierarchical tasks, comprising four high-level dimensions and ten specific tasks, thereby closely reflecting the complex and varied nature of real-world video understanding. Second, CrossVid provides 5,331 videos, along with 9,015 challenging question-answering pairs, spanning single-choice, multiple-choice, and open-ended question formats. Through extensive experiments on various open-source and closed-source MLLMs, we observe that Gemini-2.5-Pro performs best on CrossVid, achieving an average accuracy of 50.4%. Notably, our in-depth case study demonstrates that most current MLLMs struggle with CVR tasks, primarily due to their inability to integrate or compare evidence distributed across multiple videos for reasoning. These insights highlight the potential of CrossVid to guide future advancements in enhancing MLLMs' CVR capabilities.
Problem

Research questions and friction points this paper is trying to address.

Evaluating multimodal models' ability to reason across multiple videos simultaneously
Assessing spatial-temporal reasoning in diverse cross-video understanding scenarios
Addressing limitations of existing benchmarks in comprehensive cross-video evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces CrossVid benchmark for cross-video reasoning
Provides hierarchical tasks across four dimensions and ten tasks
Includes 5,331 videos with 9,015 challenging question-answer pairs
Jingyao Li
The Chinese University of Hong Kong
Large Language Models · Machine Learning
Jingyun Wang
Xiaohongshu Inc., China
Molin Tan
Xiaohongshu Inc., China
Haochen Wang
Xiaohongshu Inc., China
Cilin Yan
Xiaohongshu Inc., China
Likun Shi
Xiaohongshu Inc., China
Jiayin Cai
Xiaohongshu Inc., China
Xiaolong Jiang
Xiaohongshu Inc., China
Yao Hu
Zhejiang University
Machine Learning