🤖 AI Summary
This work addresses the limitation of single-image methods in modeling kinematic relationships for 3D articulated object reconstruction. We propose the first end-to-end framework that generates controllable, jointed 3D objects from paired static and motion-conditioned images. Our method introduces a novel dual-image conditional diffusion model that jointly infers part layout, joint types and parameters, and topological connectivity. To explicitly reason about part connectivity, we design a Chain-of-Thought graph reasoning module. Furthermore, we integrate URDF-driven physics-aware generation with multimodal alignment across images, URDFs, and text. To support training and evaluation, we introduce PM-X, the first large-scale dataset of complex movable objects, and propose LEGO-Art, an automated synthetic augmentation pipeline. On PartNet-Mobility and PM-X, our approach achieves state-of-the-art performance in state reconstruction, joint parameter estimation, and topology accuracy, demonstrating significantly improved generalization.
📄 Abstract
We present DIPO, a novel framework for the controllable generation of articulated 3D objects from a pair of images: one depicting the object in a resting state and the other in an articulated state. Compared to single-image approaches, our dual-image input imposes only a modest data-collection overhead while providing important motion information, a reliable guide for predicting kinematic relationships between parts. Specifically, we propose a dual-image diffusion model that captures relationships between the image pair to generate part layouts and joint parameters. In addition, we introduce a Chain-of-Thought (CoT) based graph reasoner that explicitly infers part connectivity relationships. To further improve robustness and generalization on complex articulated objects, we develop a fully automated dataset expansion pipeline, named LEGO-Art, that enriches the diversity and complexity of the PartNet-Mobility dataset. We propose PM-X, a large-scale dataset of complex articulated 3D objects, accompanied by rendered images, URDF annotations, and textual descriptions. Extensive experiments demonstrate that DIPO significantly outperforms existing baselines in both the resting state and the articulated state, while the proposed PM-X dataset further enhances generalization to diverse and structurally complex articulated objects. Our code and dataset will be released to the community upon publication.