DIFFVSGG: Diffusion-Driven Online Video Scene Graph Generation

📅 2025-03-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing video scene graph generation (VSGG) methods are predominantly offline, lacking support for real-time streaming input, exhibiting weak temporal modeling capability, and incurring high GPU memory overhead. This paper introduces latent diffusion models (LDMs) to online VSGG for the first time, proposing the first end-to-end online framework: (1) a unified homogeneous latent feature embedding jointly models object detection, relation prediction, and graph construction; (2) a temporal-conditional diffusion mechanism enables continuous spatiotemporal reasoning; and (3) a shared decoding head with joint latent-space representation learning significantly reduces computational redundancy. Evaluated across all three benchmark settings of Action Genome, our method outperforms offline state-of-the-art approaches while reducing GPU memory consumption by 42% and enabling real-time processing at 30 FPS.

Technology Category

Application Category

📝 Abstract
Top-leading solutions for Video Scene Graph Generation (VSGG) typically adopt an offline pipeline. Though demonstrating promising performance, they remain unable to handle real-time video streams and consume large GPU memory. Moreover, these approaches fall short in temporal reasoning, merely aggregating frame-level predictions over a temporal context. In response, we introduce DIFFVSGG, an online VSGG solution that frames this task as an iterative scene graph update problem. Drawing inspiration from Latent Diffusion Models (LDMs) which generate images via denoising a latent feature embedding, we unify the decoding of object classification, bounding box regression, and graph generation three tasks using one shared feature embedding. Then, given an embedding containing unified features of object pairs, we conduct a step-wise Denoising on it within LDMs, so as to deliver a clean embedding which clearly indicates the relationships between objects. This embedding then serves as the input to task-specific heads for object classification, scene graph generation, etc. DIFFVSGG further facilitates continuous temporal reasoning, where predictions for subsequent frames leverage results of past frames as the conditional inputs of LDMs, to guide the reverse diffusion process for current frames. Extensive experiments on three setups of Action Genome demonstrate the superiority of DIFFVSGG.
Problem

Research questions and friction points this paper is trying to address.

Real-time video scene graph generation
Reduced GPU memory consumption
Enhanced temporal reasoning in video analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online VSGG using iterative scene graph updates
Unified feature embedding for multiple tasks
Temporal reasoning via conditional LDM inputs
🔎 Similar Papers
No similar papers found.