Holistic Understanding of 3D Scenes as Universal Scene Description

📅 2024-12-02
🏛️ arXiv.org
📈 Citations: 2
✨ Influential: 0
🤖 AI Summary
To address the insufficient modeling of interactive and articulated objects in 3D scene understanding, this paper introduces the first high-quality USD-formatted indoor scene dataset (280 scenes) tailored for interaction and motion understanding. We systematically define and annotate part-level kinematic attributes (type, parameters, interactivity) and physical properties (mass, connectivity), the first such effort. We propose a USD-native unified scene representation paradigm and a multi-task learning architecture enabling end-to-end joint prediction of part segmentation and motion parameters. We release the first dedicated benchmark and a strong baseline model, supporting eight categories of high-precision semantic-motion joint annotations. The dataset, benchmark, and code are fully open-sourced. Our work significantly advances 3D interactive scene understanding, achieving state-of-the-art performance and enabling applications in embodied AI and holographic scene reasoning for mixed reality.
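A USD-native representation like the one described above would typically attach kinematic and physical annotations directly to scene prims. The following is a minimal illustrative sketch in USD's text (`.usda`) syntax; the prim names and custom attribute names are hypothetical and do not reflect the dataset's actual schema:

```usda
#usda 1.0
def Xform "Cabinet"
{
    def Mesh "Door"
    {
        # Hypothetical part-level annotations (names are illustrative only)
        custom string  anno:motionType    = "revolute"
        custom bool    anno:interactable  = true
        custom float3  anno:motionAxis    = (0, 0, 1)
        custom float3  anno:motionOrigin  = (0.4, 0, 0)
        custom string  anno:connectedTo   = "/Cabinet/Body"
        custom float   anno:mass          = 3.5
    }
}
```

Storing annotations as prim attributes in this way is what makes the data directly consumable by any USD-aware tool without a separate sidecar format.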

๐Ÿ“ Abstract
3D scene understanding is a long-standing challenge in computer vision and a key component in enabling mixed reality, wearable computing, and embodied AI. Providing a solution to these applications requires a multifaceted approach that covers scene-centric, object-centric, as well as interaction-centric capabilities. While there exist numerous datasets approaching the former two problems, the task of understanding interactable and articulated objects is underrepresented and only partly covered by current works. In this work, we address this shortcoming and introduce (1) an expertly curated dataset in the Universal Scene Description (USD) format, featuring high-quality manual annotations for instance segmentation and articulation on 280 indoor scenes; (2) a learning-based model together with a novel baseline capable of predicting part segmentation along with a full specification of motion attributes, including motion type, articulated and interactable parts, and motion parameters; (3) a benchmark serving to compare upcoming methods for the task at hand. Overall, our dataset provides 8 types of annotations - object and part segmentations, motion types, movable and interactable parts, motion parameters, connectivity, and object mass annotations. With its broad and high-quality annotations, the data provides the basis for holistic 3D scene understanding models. All data is provided in the USD format, allowing interoperability and easy integration with downstream tasks. We provide open access to our dataset, benchmark, and method's source code.
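The eight annotation types listed in the abstract could be bundled into one record per articulated part. The sketch below is a hypothetical illustration of such a record; all field names and example values are assumptions for clarity, not the dataset's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical per-part record covering the paper's eight annotation types:
# object/part segmentation labels, motion type, movable/interactable flags,
# motion parameters (axis, origin), connectivity, and object mass.
# Field names are illustrative, not the dataset's USD schema.
@dataclass
class PartAnnotation:
    object_id: int                          # object-level segmentation label
    part_id: int                            # part-level segmentation label
    motion_type: str                        # e.g. "revolute", "prismatic", "fixed"
    movable: bool                           # whether the part articulates
    interactable: bool                      # whether an agent can operate it
    motion_axis: tuple = (0.0, 0.0, 1.0)    # motion parameters: axis ...
    motion_origin: tuple = (0.0, 0.0, 0.0)  # ... and pivot point
    connected_to: Optional[int] = None      # connectivity: parent part id
    mass_kg: float = 0.0                    # object mass annotation

# Example: a cabinet door rotating about a vertical hinge.
door = PartAnnotation(
    object_id=7, part_id=2, motion_type="revolute",
    movable=True, interactable=True,
    motion_axis=(0.0, 0.0, 1.0), motion_origin=(0.4, 0.0, 0.0),
    connected_to=1, mass_kg=3.5,
)
```

A flat record like this is convenient for training the joint segmentation-and-motion model, while the released USD files keep the same information attached to scene prims for interoperability.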
Problem

Research questions and friction points this paper is trying to address.

Understanding interactable and articulated objects in 3D scenes
Lack of standardized datasets for articulated object motion
Need for unified frameworks for part segmentation and motion prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curated 3D dataset with manual annotations
Unified framework for part segmentation
Standardized scene representation format
Anna-Maria Halacheva
Doctoral Researcher, INSAIT, Sofia University
computer vision, 3D vision, robotics, multi-modal AI, natural language processing
Yang Miao
INSAIT, Sofia University
Computer Vision, Robotics
Jan-Nico Zaech
Research Scientist, INSAIT, Sofia University
Computer Vision, Robotics, Autonomous Systems, Quantum Computer Vision
Xi Wang
INSAIT, Sofia University "St. Kliment Ohridski", Bulgaria; ETH Zurich, Switzerland
L. V. Gool
INSAIT, Sofia University "St. Kliment Ohridski", Bulgaria; ETH Zurich, Switzerland
D. Paudel
INSAIT, Sofia University "St. Kliment Ohridski", Bulgaria