Generating Actionable Robot Knowledge Bases by Combining 3D Scene Graphs with Robot Ontologies

📅 2025-07-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
In robotics, heterogeneous scene description formats—such as MJCF, URDF, and SDF—are mutually incompatible, severely impeding unified environmental knowledge modeling and semantic reasoning. Method: This paper proposes a Unified Scene Graph Model grounded in Universal Scene Description (USD), enabling standardized, semantics-preserving fusion of multi-source formats into a coherent USD representation. It introduces a semantic mapping and annotation framework, aligned with robot ontologies, to construct an executable, task-oriented knowledge base, integrated with ontology-based reasoning and a Web-based visualization tool for semantic querying and environment management. Results: Experiments demonstrate automatic conversion of procedural 3D environments into semantically enriched USD scenes, generation of structured knowledge graphs, and real-time, interpretable robotic decision-making when answering competency questions. The approach substantially improves cross-format environmental understanding and supports cognitive reasoning.

📝 Abstract
In robotics, the effective integration of environmental data into actionable knowledge remains a significant challenge due to the variety and incompatibility of data formats commonly used in scene descriptions, such as MJCF, URDF, and SDF. This paper presents a novel approach that addresses these challenges by developing a unified scene graph model that standardizes these varied formats into the Universal Scene Description (USD) format. This standardization facilitates the integration of these scene graphs with robot ontologies through semantic reporting, enabling the translation of complex environmental data into actionable knowledge essential for cognitive robotic control. We evaluated our approach by converting procedural 3D environments into USD format, annotating them semantically, and translating them into a knowledge graph that effectively answers competency questions, demonstrating its utility for real-time robotic decision-making. Additionally, we developed a web-based visualization tool to support the semantic mapping process, providing users with an intuitive interface to manage the 3D environment.
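The conversion step described above can be sketched in miniature: map the links of a URDF fragment onto USD-style prim paths and attach semantic class annotations. This is a minimal illustrative sketch, not the paper's actual pipeline; the prim-path layout and the `semantic:class` key are assumptions made for illustration.

```python
# Hypothetical sketch: normalizing a URDF fragment into a USD-style
# unified scene graph with semantic annotations. The prim paths and
# the "semantic:class" key are illustrative, not the paper's schema.
import xml.etree.ElementTree as ET

URDF = """
<robot name="kitchen">
  <link name="table"/>
  <link name="cup"/>
</robot>
"""

def urdf_to_scene_graph(urdf_xml, annotations):
    """Map URDF <link> elements onto USD-like prim paths,
    attaching a semantic class where one is known."""
    root = ET.fromstring(urdf_xml)
    graph = {}
    for link in root.iter("link"):
        name = link.get("name")
        prim_path = f"/World/{root.get('name')}/{name}"
        graph[prim_path] = {
            "type": "Xform",
            "semantic:class": annotations.get(name),
        }
    return graph

scene = urdf_to_scene_graph(URDF, {"cup": "Container", "table": "SupportSurface"})
```

A real implementation would emit actual USD prims (e.g. via the `usd-core` package) rather than a plain dictionary, but the mapping logic is the same.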
Problem

Research questions and friction points this paper is trying to address.

Standardizing varied 3D scene formats into USD for robotics
Integrating scene graphs with robot ontologies via semantic reporting
Enabling real-time robotic decision-making through actionable knowledge graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified scene graph model standardizes formats into USD
Semantic reporting integrates scene graphs with robot ontologies
Web-based tool visualizes and manages 3D environment semantically
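To make the "actionable knowledge graph" idea above concrete, the annotated scene can be flattened into subject–predicate–object triples and queried for a competency question. This is a toy sketch under assumed names: the triple schema, predicates, and the question itself are illustrative, not the paper's ontology.

```python
# Hypothetical sketch: answering a competency question over a toy
# knowledge graph derived from an annotated scene. The triple schema
# and predicates are illustrative, not the paper's actual ontology.
TRIPLES = [
    ("/World/kitchen/cup", "rdf:type", "Container"),
    ("/World/kitchen/table", "rdf:type", "SupportSurface"),
    ("/World/kitchen/cup", "isOntopOf", "/World/kitchen/table"),
]

def ask(triples, subject=None, predicate=None, obj=None):
    """Match triples against a pattern; None acts as a wildcard,
    mimicking a basic SPARQL triple pattern."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# Competency question: which objects in the scene are containers?
containers = [s for s, _, _ in ask(TRIPLES, predicate="rdf:type", obj="Container")]
```

In practice a triple store with SPARQL support (e.g. `rdflib`) would replace this hand-rolled matcher, but the query shape is the same.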
Giang Nguyen
Institute for Artificial Intelligence, University of Bremen, Germany
Mihai Pomarlan
Researcher, University of Bremen
AI, Knowledge Engineering, Robotics, Motion Planning, Sensor Fusion
Sascha Jongebloed
Institute for Artificial Intelligence, University of Bremen, Germany
Nils Leusmann
Institute for Artificial Intelligence, University of Bremen, Germany
Minh Nhat Vu
Automation & Control Institute (ACIN), Vienna, Austria
Robotics
Michael Beetz
Institute for Artificial Intelligence, Computer Science Department, University of Bremen
Cognitive Robotics, AI-based Robotics, Plan-based Control, Semantic Perception, Knowledge Processing for Robots