Open World Scene Graph Generation using Vision Language Models

📅 2025-06-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing scene graph generation (SGG) methods rely on closed-set supervised training, limiting generalization to unseen objects and relations in open-world settings—even vision-language models (VLMs) require fine-tuning. To address this, we propose the first training-free open-world SGG framework: (1) leveraging multimodal prompting to elicit zero-shot recognition capabilities from pretrained VLMs (e.g., CLIP, Flamingo); (2) introducing a cross-modal embedding alignment mechanism to unify visual and linguistic semantic spaces; and (3) designing a lightweight relation-pair optimization strategy for structured zero-shot inference. We further establish the first benchmark protocol for open-world SGG evaluation. Extensive experiments on Visual Genome, Open Images V6, and PSG demonstrate that our method significantly improves generalization to novel objects and relations without any training—effectively breaking the closed-set assumption inherent in conventional SGG approaches.
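The cross-modal embedding-alignment idea can be illustrated with a minimal sketch: embed a detected subject–object pair and a set of text prompts describing candidate relations in a shared space, then pick the relation whose prompt embedding is closest. The vectors and prompts below are toy stand-ins, not the paper's actual CLIP/Flamingo encoders; only the zero-shot scoring pattern is shown.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical stand-ins for VLM encoders: in practice a CLIP-style
# image encoder would embed the (person, horse) region pair and a text
# encoder would embed each relation prompt; here both are toy vectors.
pair_embedding = np.array([0.9, 0.1, 0.2])
prompt_embeddings = {
    "a person riding a horse":  np.array([0.88, 0.12, 0.18]),
    "a person feeding a horse": np.array([0.10, 0.90, 0.30]),
    "a person next to a horse": np.array([0.30, 0.20, 0.90]),
}

# Score every relation prompt against the visual pair and keep the best;
# no training occurs -- inference reduces to nearest-neighbor lookup.
scores = {p: cosine(pair_embedding, e) for p, e in prompt_embeddings.items()}
best_relation = max(scores, key=scores.get)
```

Because the relation vocabulary lives entirely in the text prompts, unseen relations can be added at inference time by appending new prompts, which is what makes the framework open-world.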

📝 Abstract
Scene-Graph Generation (SGG) seeks to recognize objects in an image and distill their salient pairwise relationships. Most methods depend on dataset-specific supervision to learn the variety of interactions, restricting their usefulness in open-world settings involving novel objects and/or relations. Even methods that leverage large Vision Language Models (VLMs) typically require benchmark-specific fine-tuning. We introduce Open-World SGG, a training-free, efficient, model-agnostic framework that taps directly into the pretrained knowledge of VLMs to produce scene graphs with zero additional learning. Casting SGG as a zero-shot structured-reasoning problem, our method combines multimodal prompting, embedding alignment, and a lightweight pair-refinement strategy, enabling inference over unseen object vocabularies and relation sets. To assess this setting, we formalize an Open-World evaluation protocol that measures performance when no SGG-specific data have been observed, in terms of either objects or relations. Experiments on Visual Genome, Open Images V6, and the Panoptic Scene Graph (PSG) dataset demonstrate the capacity of pretrained VLMs to perform relational understanding without task-level training.
Problem

Research questions and friction points this paper is trying to address.

Recognizing objects and relationships in open-world settings
Leveraging pretrained VLMs without dataset-specific fine-tuning
Evaluating performance on unseen object and relation vocabularies
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages pretrained VLMs without fine-tuning
Uses multimodal prompting and embedding alignment
Lightweight pair-refinement for unseen objects
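The pair-refinement contribution above can be sketched as a pruning step: rather than scoring every ordered subject–object pair against every relation prompt, rank candidate pairs with a cheap heuristic and keep only the top few. The scorer below (negative box distance) is a hypothetical placeholder for whatever spatial or embedding-based score the paper uses; it shows the structure of the optimization, not the authors' exact criterion.

```python
from itertools import permutations

def refine_pairs(objects, pair_score, top_k=3):
    """Keep the top_k ordered (subject, object) pairs by score.

    `pair_score` is a hypothetical scorer (e.g., spatial proximity or
    embedding similarity); any callable returning a float works here.
    """
    candidates = list(permutations(objects, 2))
    candidates.sort(key=pair_score, reverse=True)
    return candidates[:top_k]

# Toy example: prefer pairs whose (hypothetical) box centers are close,
# so only nearby object pairs are passed on to relation scoring.
boxes = {"person": (10, 10), "horse": (12, 11), "tree": (90, 80)}

def proximity(pair):
    (x1, y1), (x2, y2) = boxes[pair[0]], boxes[pair[1]]
    return -(abs(x1 - x2) + abs(y1 - y2))  # higher = closer

top_pairs = refine_pairs(list(boxes), proximity, top_k=2)
```

Pruning the quadratic pair set before invoking the VLM is what keeps the zero-shot inference lightweight: only the surviving pairs incur a prompt-scoring pass.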