🤖 AI Summary
Open-world 3D scene understanding is hindered by closed-vocabulary supervision and static annotations, limiting dynamic and long-tailed semantic reasoning. To address this, we propose the first unified framework integrating vision-language models (VLMs) with retrieval-augmented generation (RAG) for category-agnostic dynamic scene graph generation and multimodal interactive reasoning. Our key contributions are: (1) a 3D-aware open-vocabulary scene graph generation mechanism that grounds semantics directly in geometric and visual features; and (2) a vector-database-backed cross-modal RAG pipeline enabling language-guided object localization, relational inference, and task planning. Evaluated on 3DSSG and Replica, our method achieves significant improvements over state-of-the-art approaches across four tasks—scene question answering, visual grounding, instance retrieval, and task planning—demonstrating strong generalization and scalability to unseen categories and complex interactions.
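The first component's output can be pictured as a graph whose nodes carry free-form labels grounded in 3D geometry, rather than indices into a fixed class list. The sketch below is illustrative only: the `SceneObject`/`SceneGraph` names and fields are our own assumptions, not the paper's actual data structures.

```python
from dataclasses import dataclass, field

@dataclass
class SceneObject:
    # Open-vocabulary label: any free-form string, no fixed label set.
    label: str
    # 3D grounding kept alongside semantics (here, an illustrative box center).
    center: tuple

@dataclass
class SceneGraph:
    objects: dict = field(default_factory=dict)    # object id -> SceneObject
    relations: list = field(default_factory=list)  # (subject id, predicate, object id)

    def add_object(self, oid, label, center):
        self.objects[oid] = SceneObject(label, center)

    def add_relation(self, subj, predicate, obj):
        self.relations.append((subj, predicate, obj))

    def relations_of(self, oid):
        """All relation triples in which object `oid` participates."""
        return [r for r in self.relations if oid in (r[0], r[2])]

g = SceneGraph()
# An unseen, long-tailed category poses no problem for a string-labeled graph:
g.add_object(0, "espresso machine", (1.2, 0.9, 0.4))
g.add_object(1, "kitchen counter", (1.0, 0.8, 0.0))
g.add_relation(0, "standing on", 1)
```

Because labels and predicates are plain strings, the graph can grow dynamically as a VLM proposes new categories and relationships at inference time.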
📝 Abstract
Understanding 3D scenes in open-world settings poses fundamental challenges for vision and robotics, particularly due to the limitations of closed-vocabulary supervision and static annotations. To address this, we propose a unified framework for Open-World 3D Scene Graph Generation with Retrieval-Augmented Reasoning, which enables generalizable and interactive 3D scene understanding. Our method integrates Vision-Language Models (VLMs) with retrieval-based reasoning to support multimodal exploration and language-guided interaction. The framework comprises two key components: (1) a dynamic scene graph generation module that detects objects and infers semantic relationships without fixed label sets, and (2) a retrieval-augmented reasoning pipeline that encodes scene graphs into a vector database to support text/image-conditioned queries. We evaluate our method on the 3DSSG and Replica benchmarks across four tasks (scene question answering, visual grounding, instance retrieval, and task planning), demonstrating robust generalization and superior performance in diverse environments. Our results highlight the effectiveness of combining open-vocabulary perception with retrieval-based reasoning for scalable 3D scene understanding.
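The second component can be pictured as embedding each scene-graph fact as a vector and answering language queries by nearest-neighbor search. The sketch below is a minimal stand-in, not the paper's pipeline: it substitutes a toy bag-of-words embedding and brute-force cosine similarity for the VLM encoder and a real vector database, and all names (`ToyVectorDB`, `embed`) are our assumptions.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a learned text/image encoder: a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorDB:
    """Brute-force stand-in for a real vector database (e.g. FAISS or Milvus)."""
    def __init__(self):
        self.entries = []  # list of (description, vector) pairs

    def add(self, description):
        self.entries.append((description, embed(description)))

    def query(self, text, k=1):
        q = embed(text)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [desc for desc, _ in ranked[:k]]

db = ToyVectorDB()
# Scene-graph facts serialized as text, one entry per node or relation:
db.add("red chair next to the wooden table")
db.add("blue sofa against the wall")
db.add("lamp standing on the desk")

print(db.query("where is the chair"))  # → ['red chair next to the wooden table']
```

In the full system, the same index would hold embeddings of visual crops as well as text, so a query image or phrase retrieves the relevant graph fragment for grounding, instance retrieval, or downstream task planning.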