Video-R2: Reinforcing Consistent and Grounded Reasoning in Multimodal Language Models

📅 2025-11-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Multimodal large language models (MLLMs) frequently exhibit logical inconsistencies and weak visual grounding in video reasoning, producing plausible yet unreliable hallucinated explanations. To address this, we propose a reinforcement learning–based post-training framework tailored to dynamic video understanding. Our method introduces two diagnostic metrics, Think-Answer Consistency (TAC) and Video Attention Score (VAS), to quantify reasoning consistency and visual grounding strength, respectively. We further design a Temporal Alignment Reward (TAR) that guides Group Relative Policy Optimization (GRPO), combined with timestamp-aware supervised fine-tuning in a two-stage post-training paradigm. Evaluated across 11 video reasoning benchmarks, our approach significantly improves TAC, VAS, and overall accuracy, strengthening the model's reliance on visual content and yielding more coherent causal reasoning.

📝 Abstract
Reasoning over dynamic visual content remains a central challenge for multimodal large language models. Recent thinking models generate explicit reasoning traces for interpretability; however, their reasoning often appears convincing while being logically inconsistent or weakly grounded in visual evidence. We identify and formalize these issues through two diagnostic metrics: Think-Answer Consistency (TAC), which measures the alignment between reasoning and answers, and Video Attention Score (VAS), which captures the extent to which reasoning depends on visual versus textual cues. Analysis across 11 video reasoning benchmarks shows that current models rely heavily on linguistic priors rather than visual content. To address this, we propose a reinforcement learning approach that enhances both temporal precision and reasoning consistency. Our approach combines timestamp-aware supervised fine-tuning with Group Relative Policy Optimization (GRPO) guided by a novel Temporal Alignment Reward (TAR). This two-stage post-training procedure encourages temporally aligned and causally coherent video reasoning. The resulting model, Video-R2, achieves consistently higher TAC, VAS, and accuracy across multiple benchmarks, demonstrating that improvements in temporal alignment and reasoning coherence lead to more accurate and trustworthy video understanding. Our code, dataset, and model will be open-sourced.
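The abstract's GRPO stage can be sketched in a few lines. GRPO scores a group of sampled responses per prompt and normalizes each reward against the group's statistics; the composite reward mentioned here (answer correctness plus a temporal-alignment term) is an assumption for illustration, not the paper's exact formulation.

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: normalize each sampled
    response's reward by the mean and std of its sampling group."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Hypothetical example: four reasoning traces sampled for one video
# question, each scored by a composite reward (e.g. correctness + TAR).
rewards = [1.0, 0.2, 0.6, 0.2]
advantages = grpo_advantages(rewards)
```

The advantages are zero-mean within the group, so traces that out-score their siblings are reinforced and weaker ones suppressed, without needing a learned value baseline.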
Problem

Research questions and friction points this paper is trying to address.

Addressing logical inconsistencies in multimodal video reasoning models
Reducing overreliance on linguistic priors over visual evidence
Improving temporal alignment and reasoning coherence in video understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning for temporal precision and consistency
Timestamp-aware supervised fine-tuning combined with GRPO
Temporal Alignment Reward for coherent video reasoning
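One plausible form of a temporal-alignment signal, shown here only as a sketch (the paper's exact TAR definition may differ), is the temporal IoU between a timestamp interval cited in the reasoning trace and a reference interval:

```python
def temporal_iou(pred, ref):
    """Temporal IoU between a predicted and a reference timestamp
    interval in seconds; a common reward for temporal grounding."""
    (ps, pe), (rs, re) = pred, ref
    inter = max(0.0, min(pe, re) - max(ps, rs))  # overlap length
    union = (pe - ps) + (re - rs) - inter        # combined extent
    return inter / union if union > 0 else 0.0

# Predicted segment 4-10s vs. reference 6-12s: 4s overlap, 8s union.
score = temporal_iou((4.0, 10.0), (6.0, 12.0))  # -> 0.5
```

A dense reward like this, rather than a binary hit/miss, gives the policy gradient a smooth signal toward tighter temporal grounding.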
🔎 Similar Papers
No similar papers found.