🤖 AI Summary
Existing benchmarks for reasoning over long narrative videos overlook temporal dynamics and procedural correctness, and no dedicated evaluation framework targets multi-step reasoning in this setting. Method: We introduce VRBench, the first benchmark tailored for multi-step reasoning on long narrative videos, comprising 1,010 videos (avg. 1.6 hours), 9,468 multi-step QA pairs, and 30,292 timestamped reasoning steps. We propose a human-AI collaborative framework that generates spatiotemporally coherent reasoning chains, together with a process-level LLM-guided scoring mechanism for dual-dimensional evaluation (output correctness plus reasoning fidelity). Our pipeline integrates seven reasoning task templates, multi-stage expert filtering, and inter-annotator consistency validation, yielding MCQ accuracy as well as multidimensional reasoning-chain quality scores. Contribution/Results: A comprehensive evaluation of 12 LLMs and 16 VLMs reveals critical bottlenecks in long-horizon causal reasoning and step-wise consistency. VRBench provides a reproducible benchmark and actionable insights for advancing video-language reasoning.
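To make the data format concrete, here is a minimal, hypothetical sketch of what one annotation record could look like; the field names and contents are illustrative assumptions, not the benchmark's actual schema.

```python
# Hypothetical shape of one VRBench-style record (all field names and values are
# illustrative, not taken from the released data): a long video paired with one
# multi-step question and a reference reasoning chain grounded by timestamps.
sample = {
    "video_id": "example_0001",
    "duration_sec": 5760,                 # long narrative video (avg. ~1.6 h in the benchmark)
    "task_type": "event_attribution",     # one of the seven reasoning task types
    "question": "Why does the protagonist return to the harbor at night?",
    "options": {"A": "...", "B": "...", "C": "...", "D": "..."},
    "answer": "C",
    "reasoning_steps": [                  # timestamped steps forming the reference chain
        {"timestamp_sec": 812.0, "step": "..."},
        {"timestamp_sec": 2470.5, "step": "..."},
        {"timestamp_sec": 5133.0, "step": "..."},
    ],
}
```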
📝 Abstract
We present VRBench, the first long narrative video benchmark crafted to evaluate large models' multi-step reasoning capabilities, addressing limitations of existing evaluations that overlook temporal reasoning and procedural validity. It comprises 1,010 long videos (average duration 1.6 hours), along with 9,468 human-labeled multi-step question-answering pairs and 30,292 reasoning steps with timestamps. The videos are curated via a multi-stage filtering process, including expert inter-rater review, to prioritize plot coherence. We develop a human-AI collaborative framework that generates coherent reasoning chains, each requiring multiple temporally grounded steps and spanning seven types (e.g., event attribution, implicit inference). VRBench adopts a multi-phase evaluation pipeline that assesses models at both the outcome and process levels. Beyond MCQs for the final answers, we propose a process-level LLM-guided scoring metric that comprehensively evaluates the quality of the reasoning chain along multiple dimensions. Through extensive evaluations of 12 LLMs and 16 VLMs on VRBench, we conduct a thorough analysis and provide valuable insights for advancing multi-step reasoning.
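As a rough illustration of the dual-level evaluation described above, the sketch below combines outcome-level MCQ accuracy with a process-level score produced by an LLM judge. The types, the `judge` interface, and the aggregation are assumptions made for illustration, not the paper's actual implementation.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable

@dataclass
class ReasoningStep:
    text: str
    timestamp_sec: float  # where in the video the step is grounded

@dataclass
class Prediction:
    chosen_option: str            # model's MCQ answer, e.g. "B"
    steps: list[ReasoningStep]    # model-generated reasoning chain

@dataclass
class Sample:
    correct_option: str
    reference_steps: list[ReasoningStep]

# Assumed judge interface: given one predicted step and the reference chain, an LLM
# returns per-dimension scores in [0, 1], e.g. {"factual": 0.8, "temporal": 0.6}.
JudgeFn = Callable[[ReasoningStep, list[ReasoningStep]], dict[str, float]]

def evaluate(samples: list[Sample], preds: list[Prediction], judge: JudgeFn) -> dict[str, float]:
    """Outcome-level MCQ accuracy plus a mean process-level reasoning-chain score."""
    accuracy = mean(p.chosen_option == s.correct_option for s, p in zip(samples, preds))
    step_scores = [
        mean(judge(step, s.reference_steps).values())
        for s, p in zip(samples, preds)
        for step in p.steps
    ]
    process_score = mean(step_scores) if step_scores else 0.0
    return {"mcq_accuracy": accuracy, "process_score": process_score}
```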