ViRectify: A Challenging Benchmark for Video Reasoning Correction with Multimodal Large Language Models

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks lack systematic evaluation of multimodal large language models’ (MLLMs) video reasoning error identification and correction capabilities. To address this, we propose ViRectify—the first dedicated benchmark for video reasoning rectification, comprising over 30K instances spanning dynamic perception, scientific reasoning, and embodied decision-making. We introduce a trajectory-evidence-driven rectification framework that employs stepwise error modeling and visual evidence-aligned reward mechanisms to expose systemic asymmetries in MLLMs’ error propagation and correction behavior. The dataset is constructed via AI-assisted annotation followed by rigorous human verification; our method integrates incremental error detection, evidence grounding, and reinforcement learning–based incentives. Evaluated on 16 state-of-the-art MLLMs, ViRectify demonstrates high difficulty (e.g., GPT-5 achieves only 31.94% accuracy) and reveals counterintuitive performance patterns—e.g., Qwen2.5-VL-7B consistently outperforms its 72B counterpart—highlighting the untapped potential of lightweight models for efficient error rectification.

📝 Abstract
As multimodal large language models (MLLMs) frequently exhibit errors in complex video reasoning scenarios, correcting these errors is critical for uncovering their weaknesses and improving performance. However, existing benchmarks lack systematic evaluation of MLLMs' ability to identify and correct these video reasoning errors. To bridge this gap, we propose ViRectify, a comprehensive benchmark to evaluate their fine-grained correction capability. Through an AI-assisted annotation pipeline with human verification, we construct a dataset of over 30K instances spanning the dynamic perception, scientific reasoning, and embodied decision-making domains. In ViRectify, we challenge MLLMs to perform step-wise error identification and to generate rationales grounded in key video evidence. In addition, we propose a trajectory-evidence-driven correction framework comprising step-wise error trajectory modeling and reward modeling on visual-evidence-grounded correction, which encourages the model to explicitly attend to error propagation and the key timestamps needed for correction. Extensive evaluation across 16 advanced MLLMs demonstrates that ViRectify serves as a challenging testbed: GPT-5 achieves only 31.94% correction accuracy. Our framework enables Qwen2.5-VL-7B to consistently outperform 72B variants on ViRectify, showing the effectiveness of our approach. Further analysis uncovers systematic asymmetries in error correction across models, and our dataset also serves as a valuable resource for reflection learning. We believe ViRectify provides a new direction for comprehensively evaluating advanced MLLMs in video reasoning.
Problem

Research questions and friction points this paper is trying to address.

Evaluates multimodal large language models' ability to identify and correct video reasoning errors.
Addresses the lack of systematic benchmarks for fine-grained video error correction.
Focuses on dynamic perception, scientific reasoning, and embodied decision-making domains.
Innovation

Methods, ideas, or system contributions that make the work stand out.

AI-assisted annotation pipeline with human verification
Trajectory evidence-driven correction framework
Step-wise error identification with video evidence grounding
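The paper does not specify its exact scoring formulas here, but the evaluation it describes (flagging which reasoning steps are erroneous and grounding corrections in video timestamps) can be sketched as two simple metrics. The function names, the F1-over-steps formulation, and the timestamp tolerance `tol` are all illustrative assumptions, not the authors' actual protocol:

```python
def stepwise_f1(pred_steps: set, gold_steps: set) -> float:
    """F1 over the set of reasoning-step indices flagged as erroneous.

    Assumed scoring: a prediction is a true positive when the model flags
    a step index that the gold annotation also marks as an error.
    """
    if not pred_steps and not gold_steps:
        return 1.0  # trajectory correctly judged error-free
    tp = len(pred_steps & gold_steps)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_steps)
    recall = tp / len(gold_steps)
    return 2 * precision * recall / (precision + recall)


def evidence_recall(pred_ts, gold_ts, tol=1.0):
    """Fraction of gold evidence timestamps (seconds) that some predicted
    timestamp matches within `tol` seconds -- a stand-in for the paper's
    'key video evidence grounding' check."""
    if not gold_ts:
        return 1.0
    matched = sum(any(abs(p - g) <= tol for p in pred_ts) for g in gold_ts)
    return matched / len(gold_ts)
```

For example, a model that flags steps {2, 3} when the gold errors are {2, 4} scores a step-wise F1 of 0.5, regardless of whether its evidence timestamps land near the annotated ones.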
Xusen Hei
South China University of Technology
Jiali Chen
Apple
Machine Learning
Jinyu Yang
South China University of Technology
Mengchen Zhao
South China University of Technology
Reinforcement Learning · Multi-Agent Systems · Generative Decision Making · LLM Agents
Yi Cai
South China University of Technology