VCR-Bench: A Comprehensive Evaluation Framework for Video Chain-of-Thought Reasoning

📅 2025-04-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing video benchmarks lack fine-grained evaluation of Chain-of-Thought (CoT) reasoning processes, making it difficult to disentangle perceptual deficits from reasoning failures. Method: We introduce VCR-Bench, the first rigorous evaluation framework for video-based CoT reasoning, featuring a benchmark of 859 videos and 1,034 question-answer pairs, each accompanied by a human-annotated, stepwise CoT rationale whose steps are explicitly labeled as perception or reasoning. We propose a decoupled evaluation paradigm with seven task dimensions and step-level CoT scoring metrics. Results: Experiments reveal severe limitations in current large vision-language models' (LVLMs') video CoT capabilities (best model: only a 62.8% CoT score and 56.7% accuracy), with perception steps significantly underperforming reasoning steps, systematically identifying spatiotemporal perception as the critical bottleneck. The framework is both diagnostic and extensible, establishing a new standard for evaluating video reasoning.

📝 Abstract
The advancement of Chain-of-Thought (CoT) reasoning has significantly enhanced the capabilities of large language models (LLMs) and large vision-language models (LVLMs). However, a rigorous evaluation framework for video CoT reasoning remains absent. Current video benchmarks fail to adequately assess the reasoning process and expose whether failures stem from deficiencies in perception or reasoning capabilities. Therefore, we introduce VCR-Bench, a novel benchmark designed to comprehensively evaluate LVLMs' Video Chain-of-Thought Reasoning capabilities. VCR-Bench comprises 859 videos spanning a variety of content and durations, along with 1,034 high-quality question-answer pairs. Each pair is manually annotated with a stepwise CoT rationale, where every step is tagged to indicate whether it draws on perception or reasoning capabilities. Furthermore, we design seven distinct task dimensions and propose the CoT score to assess the entire CoT process based on the stepwise tagged rationales. Extensive experiments on VCR-Bench highlight substantial limitations in current LVLMs. Even the top-performing model, o1, achieves only a 62.8% CoT score and a 56.7% accuracy, while most models score below 40%. Experiments show most models score lower on perception than on reasoning steps, revealing LVLMs' key bottleneck in temporal-spatial information processing for complex video reasoning. A robust positive correlation between the CoT score and accuracy confirms the validity of our evaluation framework and underscores the critical role of CoT reasoning in solving complex video reasoning tasks. We hope VCR-Bench will serve as a standardized evaluation framework and expose the actual drawbacks in complex video reasoning tasks.
Problem

Research questions and friction points this paper is trying to address.

Lack of rigorous evaluation for video Chain-of-Thought reasoning
Current benchmarks fail to assess perception vs reasoning failures
Need standardized framework to expose video reasoning model limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces VCR-Bench for video CoT evaluation
Uses stepwise tagged CoT rationales
Proposes CoT score for assessment
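The paper reports per-step perception and reasoning scores alongside an overall CoT score. A minimal sketch of how such step-level aggregation could work, assuming each annotated rationale step carries a perception/reasoning tag and a binary correctness judgment (the paper's actual metric may weight or judge steps differently):

```python
# Hypothetical step-level CoT scoring sketch; tags and the binary
# correctness judgment are assumptions, not the paper's exact metric.

def cot_scores(steps):
    """steps: list of (tag, correct) pairs, tag in {'perception', 'reasoning'},
    correct in {0, 1}. Returns the overall score and per-tag scores."""
    overall = sum(c for _, c in steps) / len(steps)
    by_tag = {}
    for tag in ("perception", "reasoning"):
        tagged = [c for t, c in steps if t == tag]
        by_tag[tag] = sum(tagged) / len(tagged) if tagged else None
    return overall, by_tag

# Example: a four-step rationale where one perception step is wrong.
steps = [
    ("perception", 1), ("perception", 0),
    ("reasoning", 1), ("reasoning", 1),
]
overall, by_tag = cot_scores(steps)
# overall = 0.75, perception = 0.5, reasoning = 1.0
```

Splitting the score by tag is what lets the benchmark attribute a failure to perception rather than reasoning, mirroring the paper's finding that perception steps lag behind reasoning steps.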
Yukun Qi
University of Science and Technology of China
Yiming Zhao
University of Science and Technology of China, Huawei Noah’s Ark Lab
Yu Zeng
University of Science and Technology of China, Huawei Noah’s Ark Lab
Xikun Bao
University of Science and Technology of China, Huawei Noah’s Ark Lab
Wenxuan Huang
CUHK & ECNU
Artificial General Intelligence · MLLM · LLM · AIGC · Model Acceleration
Lin Chen
University of Science and Technology of China
Zehui Chen
USTC
Jie Zhao
Huawei Noah’s Ark Lab
Zhongang Qi
Huawei Noah’s Ark Lab
Feng Zhao
University of Science and Technology of China