🤖 AI Summary
This work addresses the reliance on manual annotation and prior knowledge in modeling unknown robots. We propose the first unsupervised, end-to-end, point-cloud-driven framework for automatic URDF generation. Given multi-frame unannotated point clouds, our method jointly performs part segmentation, hierarchical topology inference, and joint parameter estimation via motion-consistency-driven clustering, registration, and 6-DoF rigid-body tracking, ultimately producing executable, Gazebo/PyBullet-compatible URDF models. Key contributions are: (1) full automation without human intervention or robot-specific priors; and (2) the first motion-guided, unsupervised mechanism for topology inference. Evaluated on both synthetic and real-world scanned data, our approach achieves a 12.3% improvement in registration accuracy and an 18.7% gain in topology identification accuracy, significantly reducing modeling effort and enhancing the feasibility of simulation deployment.
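To make the output format concrete: a URDF is an XML file describing links (rigid bodies) and the joints connecting them, which simulators such as Gazebo and PyBullet load directly. Below is a minimal hand-written example, not generated by the paper's pipeline, with hypothetical link and joint names, showing the kind of structure the method must recover (part list, parent–child topology, and joint axis/origin parameters):

```xml
<robot name="two_link_example">
  <link name="base_link"/>
  <link name="link_1"/>
  <!-- One revolute joint: topology (parent/child) plus joint parameters
       (origin, axis, limits) -- the three quantities the pipeline estimates. -->
  <joint name="joint_1" type="revolute">
    <parent link="base_link"/>
    <child link="link_1"/>
    <origin xyz="0 0 0.1" rpy="0 0 0"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10.0" velocity="1.0"/>
  </joint>
</robot>
```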
📝 Abstract
Robot description models are essential for simulation and control, yet their creation often requires significant manual effort. To streamline this modeling process, we introduce AutoURDF, an unsupervised approach for constructing description files for unseen robots from point cloud frames. Our method leverages a cluster-based point cloud registration model that tracks the 6-DoF transformations of point clusters. Through analyzing cluster movements, we hierarchically address the following challenges: (1) moving part segmentation, (2) body topology inference, and (3) joint parameter estimation. The complete pipeline produces robot description files that are fully compatible with existing simulators. We validate our method across a variety of robots, using both synthetic and real-world scan data. Results indicate that our approach outperforms previous methods in registration and body topology estimation accuracy, offering a scalable solution for automated robot modeling.
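The core primitive in the abstract is tracking the 6-DoF rigid transform of each point cluster across frames. The paper uses a learned registration model; as a simplified illustration only, the sketch below estimates a single cluster's rigid transform with the closed-form Kabsch algorithm, assuming point correspondences between frames are already known (an assumption the actual method does not require):

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate the rigid transform (R, t) with dst ~= src @ R.T + t,
    via the Kabsch algorithm: SVD of the cross-covariance matrix.
    src, dst: (N, 3) arrays of corresponding points for one cluster."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Per-cluster transforms like these, collected over many frames, are what allow segmenting moving parts and fitting joint axes; the paper's contribution is doing this without correspondences or supervision.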