🤖 AI Summary
Existing video benchmarks lack fine-grained evaluation of Chain-of-Thought (CoT) reasoning processes, making it difficult to disentangle perceptual deficits from reasoning failures. Method: We introduce VCR-Bench, a rigorous evaluation framework for video-based CoT reasoning, featuring a benchmark of 859 videos and 1,034 question-answer pairs, each accompanied by a human-annotated, stepwise CoT rationale whose steps are explicitly tagged as perception or reasoning. We propose a decoupled evaluation paradigm with seven task dimensions and a step-level CoT score. Results: Experiments reveal severe limitations in current large vision-language models' (LVLMs') video CoT capabilities: even the best model, o1, achieves only a 62.8% CoT score and 56.7% accuracy, and perception steps significantly underperform reasoning steps, systematically identifying spatiotemporal perception as the critical bottleneck. The framework is both diagnostic and extensible, establishing a standardized protocol for evaluating video reasoning.
📝 Abstract
The advancement of Chain-of-Thought (CoT) reasoning has significantly enhanced the capabilities of large language models (LLMs) and large vision-language models (LVLMs). However, a rigorous evaluation framework for video CoT reasoning remains absent. Current video benchmarks fail to adequately assess the reasoning process or to expose whether failures stem from deficiencies in perception or reasoning capabilities. Therefore, we introduce VCR-Bench, a novel benchmark designed to comprehensively evaluate LVLMs' Video Chain-of-Thought Reasoning capabilities. VCR-Bench comprises 859 videos spanning a variety of content and durations, along with 1,034 high-quality question-answer pairs. Each pair is manually annotated with a stepwise CoT rationale, where every step is tagged to indicate whether it involves perception or reasoning capabilities. Furthermore, we design seven distinct task dimensions and propose the CoT score to assess the entire CoT process based on the stepwise tagged CoT rationales. Extensive experiments on VCR-Bench highlight substantial limitations in current LVLMs. Even the top-performing model, o1, achieves only a 62.8% CoT score and a 56.7% accuracy, while most models score below 40%. Experiments show that most models score lower on perception steps than on reasoning steps, revealing LVLMs' key bottleneck in temporal-spatial information processing for complex video reasoning. A robust positive correlation between the CoT score and accuracy confirms the validity of our evaluation framework and underscores the critical role of CoT reasoning in solving complex video reasoning tasks. We hope VCR-Bench will serve as a standardized evaluation framework and expose the actual drawbacks in complex video reasoning tasks.
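The abstract defines the CoT score only at a high level (step-level assessment against capability-tagged reference rationales), so the sketch below is a minimal illustration of how such an aggregation could work, not the paper's actual metric. The `Step` structure, the binary `matched` flag, and the simple averaging are all assumptions introduced for illustration.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical annotation unit: one step of a reference CoT rationale,
# tagged as a perception or reasoning step, plus a flag for whether the
# model's generated rationale was judged to cover it. The paper's exact
# scoring formula is not given in the abstract; this is an assumed form.
@dataclass
class Step:
    tag: str       # "perception" or "reasoning"
    matched: bool  # did the model's CoT cover this reference step?

def cot_score(steps: list[Step]) -> dict[str, float]:
    """Aggregate per-step matches into an overall score plus
    perception/reasoning breakdowns (assumed simple averaging)."""
    def avg(subset: list[Step]) -> float:
        return mean(float(s.matched) for s in subset) if subset else 0.0
    return {
        "overall": avg(steps),
        "perception": avg([s for s in steps if s.tag == "perception"]),
        "reasoning": avg([s for s in steps if s.tag == "reasoning"]),
    }

# Example: two perception steps (one missed) and one reasoning step (covered),
# mirroring the reported pattern that perception lags reasoning.
demo = [Step("perception", True), Step("perception", False), Step("reasoning", True)]
print(cot_score(demo))  # ~{'overall': 0.67, 'perception': 0.5, 'reasoning': 1.0}
```

Under this assumed scheme, the per-capability breakdown is what lets the benchmark attribute a failure to perception rather than reasoning, which is the decoupling the paper emphasizes.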