🤖 AI Summary
This work addresses a limitation of existing video large language models: reinforcement learning-based post-training rarely models temporal coherence and inter-frame causal relationships explicitly, which hinders the models' ability to capture fine-grained dynamic changes. To this end, the paper introduces Masked Video Prediction (MVP) as a novel self-supervised post-training objective: by requiring the model to reconstruct a masked segment of consecutive frames from a set of distractors, MVP compels it to learn the temporal logic and contextual dependencies underlying visual events. The approach combines a scalable synthetic video data pipeline with a reinforcement learning framework based on Group Relative Policy Optimization (GRPO) to explicitly strengthen the model's understanding of temporal causal structure. Experiments demonstrate that MVP significantly improves performance in video reasoning, temporal understanding, and causal modeling, jointly enhancing both video semantics and dynamic details.
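The masked-prediction objective described above can be sketched as data construction: mask a contiguous span of frames and ask the model to pick the ground-truth segment out of distractors. The function and field names below are illustrative assumptions, not the paper's released code:

```python
import random

def make_mvp_sample(frames, seg_len, num_distractors, distractor_pool, rng=random):
    """Build one hypothetical MVP sample: mask a contiguous segment of
    `frames` and mix the ground-truth segment with distractor segments.
    All names here are illustrative, not taken from the paper."""
    # Choose where the masked span starts.
    start = rng.randrange(0, len(frames) - seg_len + 1)
    target = frames[start:start + seg_len]          # ground-truth segment
    masked = frames[:start] + ["<MASK>"] * seg_len + frames[start + seg_len:]
    # Mix the true segment with sampled distractor segments and shuffle.
    candidates = [target] + rng.sample(distractor_pool, num_distractors)
    rng.shuffle(candidates)
    answer = candidates.index(target)               # index the model must predict
    return {"context": masked, "candidates": candidates, "answer": answer}
```

Because the pipeline only needs raw videos plus a pool of distractor segments, any video corpus can be converted into such samples, which is what makes the synthesis scalable.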
📝 Abstract
Reinforcement learning-based post-training paradigms for Video Large Language Models (VideoLLMs) have achieved significant success by optimizing for visual-semantic tasks such as captioning and VideoQA. However, while these approaches effectively enhance perception, they primarily target holistic content understanding and often lack explicit supervision for intrinsic temporal coherence and inter-frame correlations. This limits the models' ability to capture intricate dynamics and fine-grained visual causality. To bridge this gap, we propose a novel post-training objective: Masked Video Prediction (MVP). By requiring the model to reconstruct a masked contiguous segment from a set of challenging distractors, MVP forces the model to attend to the sequential logic and temporal context of events. To support training at scale, we introduce a data synthesis pipeline capable of transforming arbitrary video corpora into MVP training samples, and we further employ Group Relative Policy Optimization (GRPO) with a fine-grained reward function to strengthen the model's understanding of video context and temporal structure. Comprehensive evaluations demonstrate that MVP enhances video reasoning capabilities by directly reinforcing temporal reasoning and causal understanding.
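The GRPO side of the recipe can be sketched in a few lines: each prompt gets a group of sampled responses, each response gets a scalar reward, and advantages are the rewards normalized within the group. The reward shown here (correctness plus a small format bonus, with assumed weights) is only a stand-in for the paper's fine-grained reward, whose exact terms are not specified in the abstract:

```python
def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each sampled response's reward by
    the mean and standard deviation of its group (responses to one prompt)."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    return [(r - mean) / (var ** 0.5 + eps) for r in rewards]

def mvp_reward(pred_idx, answer_idx, well_formatted, w_acc=1.0, w_fmt=0.1):
    """Illustrative MVP reward: did the model pick the true masked segment,
    plus a small bonus for a well-formed answer. Weights are assumptions."""
    return w_acc * float(pred_idx == answer_idx) + w_fmt * float(well_formatted)
```

Normalizing within the group means no learned value function is needed: a response is pushed up or down only relative to its siblings for the same video, which suits verifiable rewards like segment-prediction correctness.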