Towards Spatio-Temporal World Scene Graph Generation from Monocular Videos

📅 2026-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing video scene graph methods, which are confined to objects visible in the current frame and struggle to model temporarily occluded entities or maintain temporal consistency in a 3D world coordinate system. To overcome these challenges, we introduce the World Scene Graph Generation (WSGG) task, which aims to construct a spatiotemporally consistent, world-coordinate-aligned scene graph from monocular videos, encompassing all interacting objects—including those not currently observed. We present the ActionGenome4D dataset and propose three approaches with distinct inductive biases, incorporating feedforward 3D reconstruction, world-coordinate bounding boxes, object persistence buffers, temporal attention enhanced by 3D motion and camera pose, and a Graph RAG–driven vision-language model. Experiments validate the effectiveness of our framework, establishing the first baseline for unlocalized relationship prediction and advancing video scene understanding toward explainable, world-centric, and temporally coherent representations.

📝 Abstract
Spatio-temporal scene graphs provide a principled representation for modeling evolving object interactions, yet existing methods remain fundamentally frame-centric: they reason only about currently visible objects, discard entities upon occlusion, and operate in 2D. To address this, we first introduce ActionGenome4D, a dataset that upgrades Action Genome videos into 4D scenes via feed-forward 3D reconstruction, world-frame oriented bounding boxes for every object involved in actions, and dense relationship annotations including for objects that are temporarily unobserved due to occlusion or camera motion. Building on this data, we formalize World Scene Graph Generation (WSGG), the task of constructing a world scene graph at each timestamp that encompasses all interacting objects in the scene, both observed and unobserved. We then propose three complementary methods, each exploring a different inductive bias for reasoning about unobserved objects: PWG (Persistent World Graph), which implements object permanence via a zero-order feature buffer; MWAE (Masked World Auto-Encoder), which reframes unobserved-object reasoning as masked completion with cross-view associative retrieval; and 4DST (4D Scene Transformer), which replaces the static buffer with differentiable per-object temporal attention enriched by 3D motion and camera-pose features. We further design and evaluate the performance of strong open-source Vision-Language Models on the WSGG task via a suite of Graph RAG-based approaches, establishing baselines for unlocalized relationship prediction. WSGG thus advances video scene understanding toward world-centric, temporally persistent, and interpretable scene reasoning.
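The abstract describes PWG as implementing object permanence via a zero-order feature buffer: an observed object refreshes its stored feature, while an occluded object carries its last feature forward unchanged and remains a node in the world graph. The sketch below illustrates that idea in minimal form; all class and method names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a zero-order object-persistence buffer, assuming a
# simple dict-based store (illustrative only; not the paper's PWG code).
from dataclasses import dataclass, field

@dataclass
class PersistenceBuffer:
    # object_id -> (last feature vector, timestamp of last observation)
    store: dict = field(default_factory=dict)

    def update(self, t, observations):
        """observations: dict mapping object_id -> feature vector for objects
        visible at time t. Unobserved objects keep their stale entry (zero-order
        hold): the last feature is carried forward without modification."""
        for obj_id, feat in observations.items():
            self.store[obj_id] = (feat, t)

    def world_graph_nodes(self, t):
        """Return every tracked object as a world-graph node, flagging which
        ones were observed at time t. Occluded objects stay in the graph,
        which is the object-permanence behavior described in the abstract."""
        return {
            obj_id: {"feature": feat, "observed": last_t == t, "age": t - last_t}
            for obj_id, (feat, last_t) in self.store.items()
        }

buf = PersistenceBuffer()
buf.update(0, {"cup": [0.1, 0.2], "person": [0.9, 0.4]})
buf.update(1, {"person": [0.8, 0.5]})  # the cup is occluded at t=1
nodes = buf.world_graph_nodes(1)
# The cup remains a node with its stale feature, marked as unobserved.
```

In this toy version the buffer never decays or evicts entries; the paper's 4DST variant reportedly replaces this static hold with learned per-object temporal attention.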
Problem

Research questions and friction points this paper is trying to address.

Spatio-temporal scene graph
World Scene Graph Generation
occlusion handling
monocular video understanding
object permanence
Innovation

Methods, ideas, or system contributions that make the work stand out.

World Scene Graph Generation
4D Scene Understanding
Object Permanence
Spatio-Temporal Reasoning
Masked Scene Completion