Fostering Video Reasoning via Next-Event Prediction

📅 2025-05-28
🤖 AI Summary
Multimodal large language models (MLLMs) exhibit weak temporal reasoning over video, rely heavily on costly human annotations, and often conflate temporal with spatial information. Method: the paper proposes next-event prediction (NEP), a fully self-supervised task that uses future video segments as a natural supervisory signal: the model takes past frames as input and generates a concise summary of the events in the withheld future frames. Contribution/Results: the authors formally define the NEP task for the first time; construct V1-33K, the first large-scale, automatically segmented video dataset, comprising 33K video clips; and release FutureBench, a dedicated benchmark for evaluating future-event prediction. By integrating video segmentation, multimodal instruction tuning, and self-supervised event-summary generation, the approach achieves an average 27.4% improvement over baselines on FutureBench. It significantly enhances the coherence and plausibility of predictions about unseen future events, demonstrating NEP as a scalable and efficient paradigm for training temporal reasoning in MLLMs.

📝 Abstract
Next-token prediction serves as the foundational learning task enabling reasoning in LLMs. But what should the learning task be when aiming to equip MLLMs with temporal reasoning capabilities over video inputs? Existing tasks such as video question answering often rely on annotations from humans or much stronger MLLMs, while video captioning tends to entangle temporal reasoning with spatial information. To address this gap, we propose next-event prediction (NEP), a learning task that harnesses future video segments as a rich, self-supervised signal to foster temporal reasoning. We segment each video into past and future frames: the MLLM takes the past frames as input and predicts a summary of events derived from the future frames, thereby encouraging the model to reason temporally in order to complete the task. To support this task, we curate V1-33K, a dataset comprising 33,000 automatically extracted video segments spanning diverse real-world scenarios. We further explore a range of video instruction-tuning strategies to study their effects on temporal reasoning. To evaluate progress, we introduce FutureBench to assess coherence in predicting unseen future events. Experiments validate that NEP offers a scalable and effective training paradigm for fostering temporal reasoning in MLLMs.
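The NEP setup described in the abstract — split each video into past and future frames, feed the past frames to the MLLM, and supervise against a summary of the future events — can be sketched as a simple data-construction step. The names, the frame representation, and the split ratio below are illustrative assumptions, not details from the paper:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class NEPExample:
    """One self-supervised training example for next-event prediction."""
    past_frames: List[str]   # model input: frames before the split point
    target_summary: str      # supervision: event summary of the future segment


def make_nep_example(frames: List[str], future_summary: str,
                     split_ratio: float = 0.5) -> NEPExample:
    """Split a video's frames at split_ratio; the past half becomes the
    input and the (automatically generated) future-event summary becomes
    the target. split_ratio=0.5 is an illustrative default."""
    if not 0 < split_ratio < 1:
        raise ValueError("split_ratio must be strictly between 0 and 1")
    split = max(1, int(len(frames) * split_ratio))
    return NEPExample(past_frames=frames[:split], target_summary=future_summary)


frames = [f"frame_{i:03d}.jpg" for i in range(8)]
ex = make_nep_example(frames, "The chef flips the pancake onto the plate.")
# ex.past_frames holds the first 4 frames; the model never sees the future
# frames directly, only the summary derived from them.
```

Because the target comes from the video itself rather than human labels, examples like this can be produced at scale, which is what makes NEP self-supervised.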
Problem

Research questions and friction points this paper is trying to address.

Defining a learning task for temporal reasoning in MLLMs with videos
Proposing next-event prediction to foster self-supervised temporal reasoning
Creating datasets and benchmarks to evaluate future event prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes next-event prediction for temporal reasoning
Uses self-supervised future video segments
Introduces V1-33K dataset for training
Authors

Haonan Wang — National University of Singapore
Hongfu Liu — National University of Singapore
Xiangyan Liu — National University of Singapore
Chao Du — Sea AI Lab, Singapore
Kenji Kawaguchi — National University of Singapore
Ye Wang — National University of Singapore
Tianyu Pang — Sea AI Lab, Singapore