🤖 AI Summary
This work addresses the limitations of existing 3D generation methods, which typically produce static meshes that are ill-suited for physical simulation and embodied intelligence, and often suffer from error accumulation due to multi-stage pipelines. The authors propose the first unified multimodal large language model (MLLM) framework that jointly performs part-level semantic decomposition and kinematic structure prediction in an end-to-end manner, automatically converting a single input mesh into a high-quality articulated 3D asset. By incorporating a sparse 3D VQ-VAE, the method drastically reduces token count, enhancing scalability for modeling complex articulated objects. The approach achieves state-of-the-art performance on PartNet-Mobility and real-world AIGC datasets, enabling high-fidelity multi-part assembly and demonstrating successful deployment in robotic physical simulation.
📝 Abstract
High-quality articulated 3D assets are indispensable for embodied AI and physical simulation, yet 3D generation still focuses on static meshes, leaving a gap in "sim-ready" interactive objects. Most recent articulated object creation methods rely on multi-stage pipelines that accumulate errors across decoupled modules. Alternatively, unified MLLMs offer a single-stage path to joint static asset understanding and sim-ready asset generation. However, dense voxel-based 3D tokenization yields long 3D token sequences and high memory overhead, limiting scalability to complex articulated objects. To address this, we propose SIMART, a unified MLLM framework that jointly performs part-level decomposition and kinematic prediction. By introducing a Sparse 3D VQ-VAE, SIMART reduces token counts by 70% compared with dense voxel tokenization, enabling high-fidelity multi-part assemblies. SIMART achieves state-of-the-art performance on PartNet-Mobility and in-the-wild AIGC datasets, and enables physics-based robotic simulation.
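The intuition behind the token-count reduction can be sketched with a toy occupancy example: a dense tokenizer emits one token per grid cell, while a sparse tokenizer emits tokens only for occupied voxels. Note this is an illustrative sketch, not the paper's implementation; the grid resolution and the hollow-shell occupancy pattern below are hypothetical, chosen to mimic mesh-derived surfaces.

```python
# Toy illustration of dense vs. sparse voxel tokenization.
# Hypothetical numbers: resolution 16 and a hollow-shell object are
# assumptions for this sketch, not values from the SIMART paper.

def dense_token_count(resolution: int) -> int:
    """A dense voxel tokenizer emits one token per grid cell."""
    return resolution ** 3

def sparse_token_count(occupied: set[tuple[int, int, int]]) -> int:
    """A sparse tokenizer (in the spirit of a Sparse 3D VQ-VAE)
    emits tokens only for occupied voxels."""
    return len(occupied)

# Hypothetical object: the one-voxel-thick surface shell of a cube.
# Mesh-derived occupancy is typically surface-like, so most cells are empty.
R = 16
shell = {(x, y, z)
         for x in range(R) for y in range(R) for z in range(R)
         if 0 in (x, y, z) or R - 1 in (x, y, z)}

dense = dense_token_count(R)        # 16^3 = 4096 tokens
sparse = sparse_token_count(shell)  # 4096 - 14^3 = 1352 tokens
print(f"token reduction: {1 - sparse / dense:.0%}")  # prints "token reduction: 67%"
```

Even this crude shell example recovers a reduction in the same ballpark as the 70% figure reported in the abstract; real meshes with finer grids are sparser still, which is what makes long articulated multi-part assemblies tractable for an MLLM's context window.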