🤖 AI Summary
Existing large-scale 3D datasets contain few interactive, articulated objects, largely because of the manual effort their construction requires. To address this, the static-to-openable (S2O) task is introduced: given a static 3D model, detect its openable parts, predict their motion, and complete the interior geometry to produce an interactive articulated 3D object. The authors formulate a unified framework for the task and curate a challenging dataset of openable 3D objects that serves as a test bed for systematic evaluation. Benchmarking methods from prior work, extended and improved variants, and simple yet effective heuristics, they find that turning static 3D objects into openable counterparts is possible, but that all methods struggle to generalize to realistic settings of the task. The work enables efficient creation of interactive 3D objects for robotic manipulation and embodied AI.
📝 Abstract
Despite much progress on large 3D datasets, there are currently few interactive 3D object datasets, and their scale is limited by the manual effort required in their construction. We introduce the static-to-openable (S2O) task, which creates interactive articulated 3D objects from static counterparts through openable part detection, motion prediction, and interior geometry completion. We formulate a unified framework to tackle this task, and curate a challenging dataset of openable 3D objects that serves as a test bed for systematic evaluation. Our experiments benchmark methods from prior work, extended and improved methods, and simple yet effective heuristics for the S2O task. We find that turning static 3D objects into interactively openable counterparts is possible, but that all methods struggle to generalize to realistic settings of the task, and we highlight promising future work directions. Our work enables efficient creation of interactive 3D objects for robotic manipulation and embodied AI tasks.
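To make the task decomposition concrete, here is a minimal toy sketch of the three S2O stages named in the abstract (openable part detection, motion prediction, interior geometry completion). All class and function names are hypothetical illustrations, not the paper's actual API; the joint predictor and interior completion are stand-ins for the learned components.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Joint:
    # Articulation parameters for one openable part (hypothetical schema).
    kind: str                           # "revolute" (hinged door) or "prismatic" (drawer)
    axis: Tuple[float, float, float]    # unit direction of the joint axis
    origin: Tuple[float, float, float]  # a point the axis passes through
    limits: Tuple[float, float]         # (closed, open) range: radians or meters

@dataclass
class OpenablePart:
    name: str
    face_ids: List[int]                 # mesh faces that move with this part
    joint: Optional[Joint] = None

@dataclass
class ArticulatedObject:
    parts: List[OpenablePart] = field(default_factory=list)
    interior_completed: bool = False    # has hidden interior geometry been filled in?

def static_to_openable(detected_parts: List[OpenablePart]) -> ArticulatedObject:
    """Toy S2O pipeline: for each detected openable part (stage 1, assumed given),
    attach a default revolute joint standing in for learned motion prediction
    (stage 2), then flag interior completion (stage 3)."""
    obj = ArticulatedObject()
    for part in detected_parts:
        # Stand-in for motion prediction: a hinge about the vertical axis,
        # opening roughly 90 degrees.
        part.joint = Joint(kind="revolute",
                           axis=(0.0, 0.0, 1.0),
                           origin=(0.0, 0.0, 0.0),
                           limits=(0.0, 1.57))
        obj.parts.append(part)
    # Stand-in for interior geometry completion behind the openable parts.
    obj.interior_completed = True
    return obj

# Example: a single cabinet door detected on a static mesh.
door = OpenablePart(name="door", face_ids=[3, 4, 5])
obj = static_to_openable([door])
```

The point of the sketch is only the data flow: a static mesh plus detected parts goes in, and an object with per-part joint parameters and completed interior geometry comes out, ready to be articulated in simulation.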