MVP: Enhancing Video Large Language Models via Self-supervised Masked Video Prediction

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a limitation of existing video large language models: reinforcement learning-based post-training does not explicitly model temporal coherence or inter-frame causal relationships, which hinders the models' ability to capture fine-grained dynamic changes. To this end, the paper introduces Masked Video Prediction (MVP) as a novel self-supervised post-training objective: by reconstructing masked consecutive video segments from distractors, MVP compels the model to learn the temporal logic and contextual dependencies underlying visual events. The approach combines a scalable synthetic video data pipeline with a reinforcement learning framework based on Group Relative Policy Optimization (GRPO) to explicitly strengthen the model's understanding of temporal causal structure. Experiments demonstrate that MVP significantly improves video reasoning, temporal understanding, and causal modeling, jointly enhancing both video semantics and fine-grained dynamics.
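The summary describes a pipeline that turns arbitrary videos into MVP training samples by masking a contiguous segment and pairing it with distractors. The paper's actual pipeline is not shown here; the sketch below is a hypothetical, minimal version of that idea, where frames are stand-in tokens and the segment length, distractor count, and `<mask>` placeholder are all illustrative choices.

```python
import random

def make_mvp_sample(frames, seg_len=4, num_distractors=3, rng=None):
    """Build one illustrative MVP sample: mask a contiguous segment of
    `frames` and pair it with non-overlapping distractor segments drawn
    from elsewhere in the same video."""
    rng = rng or random.Random(0)
    start = rng.randrange(0, len(frames) - seg_len + 1)
    answer = frames[start:start + seg_len]
    # Context with the target segment replaced by placeholder tokens.
    context = frames[:start] + ["<mask>"] * seg_len + frames[start + seg_len:]
    # Distractor starts: segments that do not overlap the answer.
    starts = [s for s in range(0, len(frames) - seg_len + 1)
              if abs(s - start) >= seg_len]
    distractors = [frames[s:s + seg_len]
                   for s in rng.sample(starts, num_distractors)]
    options = distractors + [answer]
    rng.shuffle(options)
    return {"context": context,
            "options": options,
            "label": options.index(answer)}

sample = make_mvp_sample(list(range(20)), seg_len=4, num_distractors=3)
```

In a real pipeline the "frames" would be encoded video clips and the distractors could be mined to be deliberately hard (e.g., visually similar segments), which is where the "challenging distractors" of the abstract would come in.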

πŸ“ Abstract
Reinforcement learning based post-training paradigms for Video Large Language Models (VideoLLMs) have achieved significant success by optimizing for visual-semantic tasks such as captioning or VideoQA. However, while these approaches effectively enhance perception abilities, they primarily target holistic content understanding, often lacking explicit supervision for intrinsic temporal coherence and inter-frame correlations. This tendency limits the models' ability to capture intricate dynamics and fine-grained visual causality. To explicitly bridge this gap, we propose a novel post-training objective: Masked Video Prediction (MVP). By requiring the model to reconstruct a masked continuous segment from a set of challenging distractors, MVP forces the model to attend to the sequential logic and temporal context of events. To support scalable training, we introduce a data synthesis pipeline capable of transforming arbitrary video corpora into MVP training samples, and further employ Group Relative Policy Optimization (GRPO) with a fine-grained reward function to enhance the model's understanding of video context and temporal properties. Comprehensive evaluations demonstrate that MVP enhances video reasoning capabilities by directly reinforcing temporal reasoning and causal understanding.
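The abstract's fine-grained reward function is not specified on this page, but the GRPO step it mentions has a well-known core: instead of a learned value critic, each sampled response is scored relative to its group's mean and standard deviation. A minimal sketch of that group-relative advantage (variable names are illustrative, not from the paper):

```python
def grpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages as used in GRPO: normalize each
    sampled response's reward by the group mean and std, so no
    separate value network is needed."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

For MVP, the rewards would come from whether (and how well) the model picked the correct masked segment; responses scoring above the group average get positive advantage and are reinforced.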
Problem

Research questions and friction points this paper is trying to address.

Video Large Language Models
temporal coherence
inter-frame correlations
video reasoning
causal understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Masked Video Prediction
Video Large Language Models
Temporal Coherence
Self-supervised Learning
Group Relative Policy Optimization
Xiaokun Sun
University of Science and Technology of China, State Key Laboratory of Cognitive Intelligence
Zezhong Wu
University of Science and Technology of China, State Key Laboratory of Cognitive Intelligence
Zewen Ding
University of Science and Technology of China, State Key Laboratory of Cognitive Intelligence
Linli Xu
University of Science and Technology of China