🤖 AI Summary
Aligning natural language instructions with large-scale 3D scene graphs (3DSGs) remains challenging due to cross-modal semantic gaps and the poor scalability of existing methods. Method: This paper proposes a retrieval-augmented generation (RAG)-driven language grounding framework. It builds a structured graph interface on a graph database and Cypher queries to dynamically retrieve semantically relevant subgraphs, thereby circumventing LLM context-length limits, and decouples scene understanding from language reasoning so that 3DSGs and large language models can work together efficiently. Contribution/Results: Evaluated on instruction-following and scene question-answering tasks, the method achieves significant accuracy improvements while reducing token consumption by 42%–68%. It supports both lightweight local deployment and collaboration with cloud-based LLMs, establishing a scalable, modular, cross-modal paradigm for embodied language understanding in robotics.
📝 Abstract
To provide a robot with the ability to understand and react to a user's natural language inputs, the natural language must be connected to the robot's underlying representations of the world. Recently, large language models (LLMs) and 3D scene graphs (3DSGs) have become popular choices for grounding natural language and representing the world. In this work, we address the challenge of using LLMs with 3DSGs to ground natural language. Existing methods encode the scene graph as serialized text within the LLM's context window, but this encoding does not scale to large or rich 3DSGs. Instead, we propose to use a form of Retrieval Augmented Generation to select a subset of the 3DSG relevant to the task. We encode a 3DSG in a graph database and provide a query language interface (Cypher) as a tool to the LLM with which it can retrieve relevant data for language grounding. We evaluate our approach on instruction-following and scene question-answering tasks and compare against baseline context window and code generation methods. Our results show that using Cypher as an interface to 3D scene graphs scales significantly better to large, rich graphs on both local and cloud-based models. This leads to large performance improvements in grounded language tasks while also substantially reducing the token count of the scene graph content. A video supplement is available at https://www.youtube.com/watch?v=zY_YI9giZSA.
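The retrieval mechanism the abstract describes, exposing the 3DSG through a Cypher query tool rather than serializing the whole graph into the context window, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the tool schema, function names, node labels (`Room`, `Object`), and relationship type (`CONTAINS`) are all assumptions, and the real system would run Cypher against an actual graph database (e.g. via the neo4j Python driver) rather than the in-memory stub used here so the example runs standalone.

```python
import json

# Hypothetical tool definition in the common LLM function-calling style:
# the LLM writes a Cypher query, and only the matching subgraph is
# returned to its context, instead of the entire serialized 3DSG.
SCENE_GRAPH_TOOL = {
    "name": "query_scene_graph",
    "description": (
        "Run a read-only Cypher query against the 3D scene graph "
        "and return the matching subgraph as JSON."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "cypher": {
                "type": "string",
                "description": "A Cypher MATCH ... RETURN query.",
            }
        },
        "required": ["cypher"],
    },
}


def query_scene_graph(cypher: str, driver=None) -> str:
    """Execute a Cypher query and return the result as JSON.

    If a neo4j driver is supplied, the query runs against the real
    graph database; otherwise a tiny in-memory stub answers, so this
    sketch is runnable without a database.
    """
    if driver is not None:
        # Real deployment path (neo4j Python driver API).
        with driver.session() as session:
            records = session.run(cypher)
            return json.dumps([r.data() for r in records])
    # Stub result: pretend the query matched one object node.
    return json.dumps([{"o": {"class": "mug", "room": "kitchen"}}])


# Example: grounding "bring me the mug from the kitchen" retrieves
# only the relevant subgraph, keeping the token count small.
example_query = (
    "MATCH (r:Room {name: 'kitchen'})-[:CONTAINS]->(o:Object {class: 'mug'}) "
    "RETURN o"
)
print(query_scene_graph(example_query))
```

The design point this illustrates is the decoupling claimed in the summary: the graph database owns scene storage and retrieval, while the LLM only reasons over the small subgraphs the tool returns.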