🤖 AI Summary
To address the insufficient modeling of interactive and articulated objects in 3D scene understanding, this paper introduces the first high-quality USD-formatted indoor scene dataset (280 scenes) tailored to interaction and motion understanding. We systematically define and annotate part-level kinematic attributes (motion type, parameters, interactivity) and physical properties (mass, connectivity), the first effort of its kind. We propose a USD-native unified scene representation and a multi-task learning architecture that jointly predicts part segmentation and motion parameters end to end. We release the first dedicated benchmark together with a strong baseline model, covering eight categories of high-precision joint semantic and motion annotations. The dataset, benchmark, and code are fully open-sourced. Our work significantly advances 3D interactive scene understanding, achieving state-of-the-art performance and enabling applications in embodied AI and holistic scene reasoning for mixed reality.
📝 Abstract
3D scene understanding is a long-standing challenge in computer vision and a key component in enabling mixed reality, wearable computing, and embodied AI. Serving these applications requires a multifaceted approach that covers scene-centric, object-centric, as well as interaction-centric capabilities. While numerous datasets address the former two problems, the task of understanding interactable and articulated objects remains underrepresented and only partly covered by current works. In this work, we address this shortcoming and introduce (1) an expertly curated dataset in the Universal Scene Description (USD) format, featuring high-quality manual annotations for instance segmentation and articulation on 280 indoor scenes; (2) a learning-based model together with a novel baseline capable of predicting part segmentation along with a full specification of motion attributes, including motion type, articulated and interactable parts, and motion parameters; (3) a benchmark for comparing upcoming methods on the task at hand. Overall, our dataset provides 8 types of annotations: object and part segmentations, motion types, movable and interactable parts, motion parameters, connectivity, and object mass. With its broad, high-quality annotations, the dataset provides the basis for holistic 3D scene understanding models. All data is provided in the USD format, enabling interoperability and easy integration with downstream tasks. We provide open access to our dataset, benchmark, and the source code of our method.
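Since every scene ships as a USD file, its annotations can be inspected directly with the standard `pxr` Python API. Below is a minimal sketch assuming the articulation and mass annotations are authored through the stock UsdPhysics schemas (revolute and prismatic joints, the mass API); the dataset's actual attribute layout may differ, and `scene.usda` is a placeholder path.

```python
# Minimal sketch: enumerating articulation and mass annotations in a USD scene
# with the official `pxr` API. Assumes standard UsdPhysics schemas are used;
# the dataset's actual schema may differ. "scene.usda" is a placeholder.
from pxr import Usd, UsdPhysics

stage = Usd.Stage.Open("scene.usda")

for prim in stage.Traverse():
    # Revolute joints describe rotational motion (e.g., door hinges).
    if prim.IsA(UsdPhysics.RevoluteJoint):
        joint = UsdPhysics.RevoluteJoint(prim)
        print(prim.GetPath(), "revolute",
              "axis:", joint.GetAxisAttr().Get(),
              "limits:", joint.GetLowerLimitAttr().Get(),
              joint.GetUpperLimitAttr().Get())
    # Prismatic joints describe translational motion (e.g., drawers).
    elif prim.IsA(UsdPhysics.PrismaticJoint):
        joint = UsdPhysics.PrismaticJoint(prim)
        print(prim.GetPath(), "prismatic",
              "axis:", joint.GetAxisAttr().Get())
    # Mass annotations, if authored via the physics mass API.
    if prim.HasAPI(UsdPhysics.MassAPI):
        print(prim.GetPath(), "mass:",
              UsdPhysics.MassAPI(prim).GetMassAttr().Get())
```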