EventFormer: A Node-graph Hierarchical Attention Transformer for Action-centric Video Event Prediction

📅 2025-10-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the lack of fine-grained semantic modeling and large-scale annotated data in video event prediction, this paper introduces Action-Centric Video Event Prediction (AVEP), a novel task, and establishes the first large-scale benchmark dataset comprising 35,000 videos and 178,000 event segments. It pioneers the integration of script-based event reasoning into video understanding by proposing a node-graph hierarchical attention Transformer architecture: at the node level, it fuses vision-language multimodal arguments; at the graph level, it models logical dependencies among events and coreferential relations among arguments, enabling structured event reasoning. Evaluated on AVEP, our method significantly outperforms state-of-the-art video prediction models and large vision-language models, demonstrating both the effectiveness of the proposed architecture and the high quality of the dataset. The code and dataset are publicly released.

📝 Abstract
Script event induction, which aims to predict the subsequent event from context, is a challenging NLP task that has achieved remarkable success in practical applications. However, human events are mostly recorded and presented as videos rather than scripts, and related research in the vision domain is lacking. To address this gap, we introduce AVEP (Action-centric Video Event Prediction), a task that distinguishes itself from existing video prediction tasks through its more complex logic and richer semantic information. We present a large structured dataset, consisting of about 35K annotated videos and more than 178K video clips of events, built upon existing video event datasets to support this task. The dataset offers fine-grained annotations in which the atomic unit is a multimodal event argument node, providing better structured representations of video events. Because of the complexity of event structures, traditional visual models that take patches or frames as input are not well suited to AVEP. We propose EventFormer, a node-graph hierarchical attention based video event prediction model that captures both the relationships between events and their arguments and the coreferential relationships between arguments. We conducted experiments with several SOTA video prediction models as well as LVLMs on AVEP, demonstrating both the complexity of the task and the value of the dataset. Our approach outperforms all of these video prediction models. We will release the dataset and code for replicating the experiments and annotations.
Problem

Research questions and friction points this paper is trying to address.

Predicting subsequent events from video context
Modeling complex event structures in videos
Addressing multimodal event argument relationships
Innovation

Methods, ideas, or system contributions that make the work stand out.

Node-graph hierarchical attention for video events
Multimodal event argument nodes as atomic units
Captures event-argument and coreference relationships
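The two-level design listed above can be sketched as a minimal attention stack: node-level attention fuses an event's multimodal argument embeddings into one event-node vector, and graph-level attention over the resulting event nodes produces a representation for the next event. This is an illustrative NumPy sketch under assumed shapes, not the authors' implementation; all function names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product attention.
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v

def encode_event(arg_embs):
    # Node level (hypothetical): fuse an event's multimodal argument
    # embeddings (visual + textual) via self-attention, then mean-pool
    # into a single event-node vector.
    fused = attention(arg_embs, arg_embs, arg_embs)
    return fused.mean(axis=0)

def predict_next(event_nodes):
    # Graph level (hypothetical): attend over the observed event nodes,
    # using the last event as the query, to represent the next event.
    query = event_nodes[-1:]
    return attention(query, event_nodes, event_nodes)[0]

rng = np.random.default_rng(0)
d = 8
# Three observed events, each with a few multimodal argument embeddings.
events = [rng.normal(size=(n_args, d)) for n_args in (3, 2, 4)]
nodes = np.stack([encode_event(a) for a in events])  # shape (3, d)
next_event = predict_next(nodes)                     # shape (d,)
```

In the actual model, the graph level additionally encodes logical dependencies among events and coreferential links among arguments, which this sequence-only sketch omits.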
Qile Su
Beihang University, Beijing, China
Shoutai Zhu
Beihang University, Beijing, China
Shuai Zhang
University of Science and Technology Beijing, Beijing, China
Baoyu Liang
Beihang University
Chao Tong
School of Computer Science and Engineering, Beihang University
Mobile Computing · Social Computing · Complex Networks