Video-CoE: Reinforcing Video Event Prediction via Chain of Events

πŸ“… 2026-03-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Current multimodal large language models (MLLMs) show limited fine-grained temporal modeling and logical reasoning in video event prediction, and make insufficient use of visual information. To address these limitations, this work proposes the Chain of Events (CoE) paradigm, which introduces an event-chain mechanism to explicitly model the logical dependencies between observed video content and future events through temporally structured event sequences. The approach is further strengthened by a multi-protocol training strategy that sharpens the model's focus on relevant visual cues and improves its reasoning capacity. Experimental results show that the proposed method significantly outperforms state-of-the-art open-source and commercial MLLMs across multiple public benchmarks, setting a new state of the art in video event prediction.

πŸ“ Abstract
Despite advances in the application of MLLMs to various video tasks, video event prediction (VEP) remains relatively underexplored. VEP requires the model to perform fine-grained temporal modeling of videos and establish logical relationships between videos and future events, which current MLLMs still struggle with. In this work, we first present a comprehensive evaluation of current leading MLLMs on the VEP task, revealing the reasons behind their inaccurate predictions, including a lack of logical reasoning ability for future-event prediction and insufficient utilization of visual information. To address these challenges, we propose the **C**hain **o**f **E**vents (**CoE**) paradigm, which constructs temporal event chains to implicitly enforce the MLLM's focus on the visual content and on the logical connections between videos and future events, incentivizing the model's reasoning capability with multiple training protocols. Experimental results on public benchmarks demonstrate that our method outperforms both leading open-source and commercial MLLMs, establishing a new state of the art on the VEP task. Code and models will be released soon.
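The abstract describes constructing temporal event chains from observed video content before prompting the model to predict future events. Since the authors' code is not yet released, the sketch below is only a hypothetical illustration of that idea: `Event`, its fields, and `build_event_chain_prompt` are invented names, not the paper's API.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One observed video event (hypothetical schema)."""
    start: float       # start time in seconds
    end: float         # end time in seconds
    description: str   # short caption of what happens

def build_event_chain_prompt(observed: list[Event], question: str) -> str:
    """Serialize observed events as a temporally ordered chain and append
    a future-event question, loosely following the CoE idea of making the
    temporal and logical structure explicit in the prompt."""
    ordered = sorted(observed, key=lambda e: e.start)
    chain = " -> ".join(
        f"[{e.start:.0f}-{e.end:.0f}s] {e.description}" for e in ordered
    )
    return (
        f"Observed event chain: {chain}\n"
        f"Question: {question}\n"
        "Continue the chain by predicting the next event."
    )
```

For example, events observed out of order are sorted into a single timeline before the prediction question is posed, so the model receives an explicit temporal sequence rather than loose captions.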
Problem

Research questions and friction points this paper is trying to address.

video event prediction
temporal modeling
logical reasoning
multimodal large language models
future event prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain of Events
Video Event Prediction
Multimodal Large Language Models
Temporal Reasoning
Logical Inference
πŸ”Ž Similar Papers
No similar papers found.
👥 Authors
Qile Su
AMAP, Alibaba Group
Jing Tang
AMAP, Alibaba Group
Rui Chen
AMAP, Alibaba Group; Tsinghua University
Computer Vision · Pattern Recognition
Lei Sun
AMAP, Alibaba Group
Xiangxiang Chu
AMAP, Alibaba Group