AI Summary
Existing scene graph forecasting methods over-rely on visual cues and lack integration of commonsense knowledge, resulting in poor robustness for long-horizon prediction. To address this, we propose a decoupled, two-stage language-based framework: Stage I predicts object appearance and disappearance dynamics, and Stage II leverages large language models (LLMs) to generate human-object relationships, enabling sequential, structured reasoning over future scene graphs. This work is the first to formulate scene graph forecasting as an object-centric, two-stage language generation task, advancing the deep integration of LLMs into structured visual understanding. We introduce a fine-grained textual evaluation benchmark, constructed from Action Genome annotations, that supports both zero-shot and fine-tuned evaluation. Experiments demonstrate mean-Recall improvements of 3.4% for short-horizon and 21.9% for long-horizon forecasting, achieving state-of-the-art performance.
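To make the decoupled two-stage pipeline concrete, the minimal Python sketch below illustrates the Stage I / Stage II flow: first forecast which objects exist in each future frame, then generate human-object relation triplets for those objects. The function names, prompt wording, and `llm` callable are illustrative assumptions, not the released implementation.

```python
# Hypothetical sketch of the two-stage language-based anticipation pipeline.
# `llm` is any callable that maps a text prompt to a parsed response.
from typing import Callable, Dict, List, Tuple

def forecast_object_dynamics(observed_graphs: List[Dict], horizon: int,
                             llm: Callable) -> List[List[str]]:
    """Stage I: predict which objects appear or disappear in each future frame."""
    prompt = (
        "Observed scene graphs, one per frame:\n"
        f"{observed_graphs}\n"
        f"List the objects present in each of the next {horizon} frames."
    )
    # e.g. [["person", "cup"], ["person", "cup", "table"], ...]
    return llm(prompt)

def generate_relations(future_objects: List[List[str]],
                       llm: Callable) -> List[List[Tuple[str, str, str]]]:
    """Stage II: for each future frame, generate <person, predicate, object> triplets."""
    graphs = []
    for objects in future_objects:
        prompt = (
            f"Objects in this frame: {objects}.\n"
            "For each object, give the person's relations as "
            "<person, predicate, object> triplets."
        )
        graphs.append(llm(prompt))
    return graphs

def anticipate_scene_graphs(observed_graphs: List[Dict], horizon: int,
                            llm: Callable) -> List[List[Tuple[str, str, str]]]:
    """Decoupled anticipation: Stage I (object dynamics), then Stage II (relations)."""
    future_objects = forecast_object_dynamics(observed_graphs, horizon, llm)
    return generate_relations(future_objects, llm)
```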
Abstract
A scene graph is a structured representation of objects and their relationships in a scene. Scene Graph Anticipation (SGA) involves predicting future scene graphs from video clips, enabling applications such as intelligent surveillance and human-machine collaboration. Existing SGA approaches primarily leverage visual cues and often struggle to integrate valuable commonsense knowledge, which limits their long-term prediction robustness. To explicitly leverage such commonsense knowledge, we propose a new approach to better understand the objects, concepts, and relationships in a scene graph. Our approach decouples the SGA task into two steps: first, a scene graph capturing model converts a video clip into a sequence of scene graphs; then, a purely text-based model predicts the scene graphs of future frames. Our focus in this work is on the second step, which we call Linguistic Scene Graph Anticipation (LSGA) and believe is of independent interest beyond the use in SGA discussed here. For LSGA, we introduce an Object-Oriented Two-Staged Method (OOTSM) in which a Large Language Model (LLM) first forecasts object appearances and disappearances before generating detailed human-object relations. We conduct extensive experiments to evaluate OOTSM in two settings. For LSGA, we evaluate our fine-tuned open-source LLMs against zero-shot APIs (i.e., GPT-4o, GPT-4o-mini, and DeepSeek-V3) on a benchmark constructed from Action Genome annotations. For SGA, we combine OOTSM with STTran++, and our experiments demonstrate state-of-the-art performance: short-term mean-Recall (@10) increases by 3.4% while long-term mean-Recall (@50) improves dramatically by 21.9%. Code is available at https://github.com/ZhuXMMM/OOTSM.
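For reference, the mean-Recall@K figures quoted above follow the usual scene-graph convention: recall is computed per predicate class among the top-K confidence-ranked predictions and then averaged over classes. A rough per-frame sketch, assuming (subject, predicate, object) triplets and confidence-sorted predictions (not the authors' exact evaluation code):

```python
# Rough per-frame sketch of mean-Recall@K, assuming predictions are already
# sorted by confidence; ground truth and predictions are (subj, pred, obj) tuples.
from collections import defaultdict
from typing import List, Tuple

Triplet = Tuple[str, str, str]

def mean_recall_at_k(gt: List[Triplet], preds: List[Triplet], k: int) -> float:
    """Average per-predicate recall over the top-k predicted triplets."""
    top_k = set(preds[:k])
    hits, totals = defaultdict(int), defaultdict(int)
    for triplet in gt:
        predicate = triplet[1]
        totals[predicate] += 1
        if triplet in top_k:
            hits[predicate] += 1
    recalls = [hits[p] / totals[p] for p in totals]
    return sum(recalls) / len(recalls) if recalls else 0.0

# Example: mean_recall_at_k([("person", "holding", "cup")],
#                           [("person", "holding", "cup")], k=10) -> 1.0
```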