MOSS-ChatV: Reinforcement Learning with Process Reasoning Reward for Video Temporal Reasoning

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
In video temporal reasoning, multimodal large language models (MLLMs) frequently suffer from process inconsistency, where intermediate reasoning steps diverge from the underlying video dynamics, undermining interpretability and robustness. To address this, the authors propose MOSS-ChatV, a reinforcement learning framework with a dynamic time warping (DTW)-based process reward that provides fine-grained supervision of reasoning trajectories without requiring an auxiliary reward model. They further introduce MOSS-Video, a benchmark with annotated reasoning traces for video temporal reasoning; its training split is used to fine-tune MOSS-ChatV and its held-out split is reserved for evaluation. The method is architecture-agnostic and integrates with mainstream MLLMs, including Qwen2.5-VL and Phi-2, optimizing reasoning coherence end-to-end via reinforcement learning. MOSS-ChatV achieves 87.2% accuracy on the MOSS-Video test split, with gains on general-purpose benchmarks such as MVBench and MMVU and markedly more stable, consistent reasoning traces. Key contributions: (1) a rule-based, DTW-derived process reward that needs no learned reward model, (2) the MOSS-Video benchmark with annotated reasoning trajectories, and (3) evidence that process consistency improves generalization.

📝 Abstract
Video reasoning has emerged as a critical capability for multimodal large language models (MLLMs), requiring models to move beyond static perception toward coherent understanding of temporal dynamics in complex scenes. Yet existing MLLMs often exhibit process inconsistency, where intermediate reasoning drifts from video dynamics even when the final answer is correct, undermining interpretability and robustness. To address this issue, we introduce MOSS-ChatV, a reinforcement learning framework with a Dynamic Time Warping (DTW)-based process reward. This rule-based reward aligns reasoning traces with temporally grounded references, enabling efficient process supervision without auxiliary reward models. We further identify dynamic state prediction as a key measure of video reasoning and construct MOSS-Video, a benchmark with annotated reasoning traces, where the training split is used to fine-tune MOSS-ChatV and the held-out split is reserved for evaluation. MOSS-ChatV achieves 87.2% on MOSS-Video (test) and improves performance on general video benchmarks such as MVBench and MMVU. The framework consistently yields gains across different architectures, including Qwen2.5-VL and Phi-2, confirming its broad applicability. Evaluations with GPT-4o-as-judge further show that MOSS-ChatV produces more consistent and stable reasoning traces.
Problem

Research questions and friction points this paper is trying to address.

Addressing process inconsistency in video reasoning for multimodal language models
Aligning intermediate reasoning traces with temporal video dynamics
Improving interpretability and robustness of video temporal understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning with Dynamic Time Warping reward
Process reasoning reward aligns reasoning with video dynamics
Rule-based process supervision without auxiliary reward models
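To make the rule-based reward concrete, here is a minimal sketch of how a DTW alignment cost between a predicted reasoning trace and a temporally grounded reference trace can be turned into a scalar process reward. This is an illustration under stated assumptions, not the paper's implementation: the step representation, the pairwise distance function `dist`, and the exponential normalization are all hypothetical choices; only the classic DTW dynamic program itself is standard.

```python
import math

def dtw_cost(pred, ref, dist):
    """Classic dynamic-time-warping cost between two step sequences.

    pred, ref: sequences of reasoning-step representations (e.g. embeddings).
    dist: pairwise distance function between one pred step and one ref step.
    """
    n, m = len(pred), len(ref)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost aligning pred[:i] with ref[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(pred[i - 1], ref[j - 1])
            # Allowed moves: insertion, deletion, or diagonal match.
            D[i][j] = d + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def process_reward(pred_steps, ref_steps, dist, scale=1.0):
    """Map DTW cost to a reward in (0, 1]; higher means better alignment.

    Normalizing by the combined sequence length keeps the reward comparable
    across traces of different lengths (an assumed choice).
    """
    cost = dtw_cost(pred_steps, ref_steps, dist)
    norm = cost / (len(pred_steps) + len(ref_steps))
    return math.exp(-scale * norm)
```

In an RL loop, this reward would be computed per rollout and combined with the outcome (answer-correctness) reward; because it is a fixed rule over the annotated reference trace, no auxiliary reward model needs to be trained.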