🤖 AI Summary
Existing benchmarks lack systematic evaluation of multimodal large language models' (MLLMs) ability to identify and correct video reasoning errors. To address this, we propose ViRectify, a dedicated benchmark for video reasoning rectification comprising over 30K instances spanning dynamic perception, scientific reasoning, and embodied decision-making. The dataset is constructed via an AI-assisted annotation pipeline followed by rigorous human verification. We further introduce a trajectory-evidence-driven rectification framework that combines step-wise error trajectory modeling with visual evidence-grounded reward modeling, encouraging models to focus explicitly on error propagation and key timestamps. In evaluations of 16 state-of-the-art MLLMs, ViRectify proves highly challenging (GPT-5 achieves only 31.94% correction accuracy), and a Qwen2.5-VL-7B trained with our framework consistently outperforms 72B variants, highlighting the potential of lightweight models for efficient error rectification. Further analysis uncovers systematic asymmetries in error correction across models.
📝 Abstract
As multimodal large language models (MLLMs) frequently exhibit errors in complex video reasoning scenarios, correcting these errors is critical for uncovering their weaknesses and improving performance. However, existing benchmarks lack systematic evaluation of MLLMs' ability to identify and correct these video reasoning errors. To bridge this gap, we propose ViRectify, a comprehensive benchmark for evaluating the fine-grained correction capability of MLLMs. Through an AI-assisted annotation pipeline with human verification, we construct a dataset of over 30K instances spanning dynamic perception, scientific reasoning, and embodied decision-making domains. In ViRectify, we challenge MLLMs to perform step-wise error identification and to generate rationales grounded in key video evidence. In addition, we propose a trajectory-evidence-driven correction framework comprising step-wise error trajectory modeling and reward modeling on visual evidence-grounded correction, which encourages the model to explicitly attend to error propagation and the key timestamps needed for correction. Extensive evaluation across 16 advanced MLLMs demonstrates that ViRectify serves as a challenging testbed, where GPT-5 achieves only 31.94% correction accuracy. Our framework enables a Qwen2.5-VL-7B to consistently outperform 72B variants on ViRectify, showing the effectiveness of our approach. Further analysis uncovers systematic asymmetries in error correction across models, and our dataset also serves as a valuable resource for reflection learning. We believe ViRectify provides a new direction for comprehensively evaluating advanced MLLMs in video reasoning.