Real2USD: Scene Representations in Universal Scene Description Language

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM-driven robotic systems lack a unified, multimodal environment representation, which hinders robust scene understanding and high-level task planning, particularly for challenging objects such as glass. Method: the paper introduces a Universal Scene Description (USD)-based modeling framework for robotics that integrates LiDAR-based geometric reconstruction with RGB-based photometric and semantic perception to construct a structured USD scene graph; natural-language parsing is performed with Google's Gemini, and the system is validated in NVIDIA's Isaac Sim. Contribution/Results: the work adopts Pixar's USD standard for robotics, yielding a human-readable, extensible, and task-agnostic multimodal scene representation that improves an LLM's comprehension of complex indoor environments and its capacity for abstract, long-horizon task planning. The implementation is open-sourced.

📝 Abstract
Large Language Models (LLMs) can help robots reason about abstract task specifications. This requires augmenting classical representations of the environment used by robots with natural language-based priors. There are a number of existing approaches to doing so, but they are tailored to specific tasks, e.g., visual-language models for navigation, language-guided neural radiance fields for mapping, etc. This paper argues that the Universal Scene Description (USD) language is an effective and general representation of geometric, photometric, and semantic information in the environment for LLM-based robotics tasks. Our argument is simple: a USD is an XML-based scene graph, readable by LLMs and humans alike, and rich enough to support essentially any task; Pixar developed this language to store assets, scenes, and even movies. We demonstrate a "Real to USD" system using a Unitree Go2 quadruped robot carrying LiDAR and an RGB camera that (i) builds an explicit USD representation of indoor environments with diverse objects and challenging settings with lots of glass, and (ii) parses the USD using Google's Gemini to demonstrate scene understanding, complex inferences, and planning. We also study different aspects of this system in simulated warehouse and hospital settings using NVIDIA's Isaac Sim. Code is available at https://github.com/grasp-lyrl/Real2USD .
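
Because a USD stage can be written out as plain-text USDA, the scene graph the abstract describes is directly readable by an LLM. Below is a pure-Python sketch of serializing fused LiDAR/RGB detections into a minimal USDA-like scene graph. The `Detection` record and the flat Xform/Cube layout are illustrative assumptions, not the authors' implementation, which would use Pixar's `pxr.Usd` API and richer prim types.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A fused LiDAR/RGB detection (hypothetical record)."""
    label: str            # semantic class from the RGB pipeline
    position: tuple       # (x, y, z) centroid from LiDAR, meters
    size: tuple           # axis-aligned extent, meters

def detections_to_usda(detections):
    """Serialize detections into a minimal USDA-style scene graph.

    USDA is USD's human-readable ASCII form, so the result can be
    pasted straight into an LLM prompt. Each detected object becomes
    an Xform child of /World holding a Cube proxy for its extent.
    """
    lines = ["#usda 1.0", "", 'def Xform "World"', "{"]
    for i, d in enumerate(detections):
        x, y, z = d.position
        sx, sy, sz = d.size
        lines += [
            f'    def Xform "{d.label}_{i}"',
            "    {",
            f"        double3 xformOp:translate = ({x}, {y}, {z})",
            '        uniform token[] xformOpOrder = ["xformOp:translate"]',
            '        def Cube "geom"',
            "        {",
            "            double size = 1",
            f"            float3 extentScale = ({sx}, {sy}, {sz})",
            "        }",
            "    }",
        ]
    lines.append("}")
    return "\n".join(lines)

scene = detections_to_usda([
    Detection("chair", (1.0, 0.5, 0.0), (0.5, 0.5, 0.9)),
    Detection("glass_door", (3.0, 0.0, 1.0), (0.05, 1.0, 2.0)),
])
print(scene.splitlines()[0])  # -> #usda 1.0
```

The flat structure is deliberate: a shallow, uniformly named scene graph keeps the text short and easy for both humans and LLMs to scan.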
Problem

Research questions and friction points this paper is trying to address.

Developing a universal scene representation for LLM-based robotics tasks
Converting real-world sensor data into the USD format
Enabling complex scene understanding and planning by parsing USD
Innovation

Methods, ideas, or system contributions that make the work stand out.

USD as a universal scene representation for robotics
Real-time USD generation from LiDAR and RGB data
LLM-based scene parsing for inference and planning
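
The last bullet, feeding a USD scene to an LLM for inference and planning, reduces to prompt construction once the scene is plain USDA text. A minimal, hypothetical sketch follows; the function name and prompt wording are assumptions for illustration, not the paper's actual prompts (those live in the linked repository).

```python
def build_scene_query(usda_text: str, question: str) -> str:
    """Assemble an LLM prompt: USDA scene as context, task as question.

    Hypothetical template; any chat-completion API (e.g., Gemini)
    could consume the returned string as a single user message.
    """
    return (
        "You are a robot's planner. The environment is described in "
        "USD (usda) below.\n\n"
        f"```usda\n{usda_text}\n```\n\n"
        f"Task: {question}\n"
        "Answer with a step-by-step plan referencing prim paths."
    )

prompt = build_scene_query(
    '#usda 1.0\ndef Xform "World" {}',
    "Find a chair and push it to the door.",
)
```

Keeping the scene inline, rather than behind tool calls, is what makes a human-readable format like USDA attractive here: the same text serves mapping, debugging, and prompting.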
Christopher D. Hsu
Department of Electrical & Systems Engineering and General Robotics, Automation, Sensing and Perception (GRASP) Laboratory at the University of Pennsylvania
Pratik Chaudhari
University of Pennsylvania
Deep Learning · Machine Learning · Robotics