🤖 AI Summary
Existing 2D video world models struggle to capture 3D geometry and spatial relationships, limiting robotic manipulation performance. This paper proposes an interactive multimodal world model that jointly generates RGB frames, depth maps, and robot-arm segmentation masks, enabling physically consistent 3D scene modeling and action-conditioned future-frame prediction. The key contributions are: (1) MMTokenizer, a geometry-aware tokenizer that unifies multimodal signals into compact tokens; (2) an extended VideoGPT architecture with dedicated depth estimation and instance segmentation heads for autoregressive multi-step prediction; and (3) support for large-scale pretraining and transfer. Evaluated on model-based reinforcement learning and real-world imitation learning tasks, the approach significantly improves visual reconstruction fidelity and policy generalization, demonstrating the efficacy of multimodal world models for joint perception-action modeling.
📝 Abstract
Learned world models hold significant potential for robotic manipulation, as they can serve as simulators of real-world interactions. While extensive progress has been made in 2D video-based world models, these approaches often lack the geometric and spatial reasoning essential for capturing the physical structure of the 3D world. To address this limitation, we introduce iMoWM, a novel interactive world model designed to generate color images, depth maps, and robot-arm masks in an autoregressive manner conditioned on actions. To overcome the high computational cost associated with three-dimensional information, we propose MMTokenizer, which unifies multi-modal inputs into a compact token representation. This design enables iMoWM to leverage large-scale pretrained VideoGPT models while maintaining high efficiency and incorporating richer physical information. With its multi-modal representation, iMoWM not only improves the visual quality of future predictions but also serves as an effective simulator for model-based reinforcement learning (MBRL) and facilitates real-world imitation learning. Extensive experiments demonstrate the superiority of iMoWM across these tasks, showcasing the advantages of multi-modal world modeling for robotic manipulation. Homepage: https://xingyoujun.github.io/imowm/
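To make the described pipeline concrete, the following is a minimal toy sketch of the interface the abstract implies: a tokenizer that fuses RGB, depth, and mask frames into one compact discrete token grid, followed by an action-conditioned autoregressive rollout. All function names, shapes, and the stub dynamics here are illustrative assumptions, not the paper's actual MMTokenizer or VideoGPT implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

CODEBOOK_SIZE = 512  # assumed size of the discrete token codebook
GRID = 8             # assumed tokens per side after spatial downsampling

def mm_tokenize(rgb, depth, mask):
    """Toy stand-in for MMTokenizer: average-pool the channel-concatenated
    modalities into a GRID x GRID grid, then map each cell to a discrete
    token id (a placeholder for learned vector quantization)."""
    h, w, _ = rgb.shape
    stacked = np.concatenate([rgb, depth[..., None], mask[..., None]], axis=-1)
    ph, pw = h // GRID, w // GRID
    pooled = stacked[: GRID * ph, : GRID * pw]
    pooled = pooled.reshape(GRID, ph, GRID, pw, -1).mean(axis=(1, 3))
    # Hash pooled features to token ids; a real tokenizer would do nearest-
    # neighbor lookup in a learned codebook instead.
    return (pooled.sum(axis=-1) * 1000).astype(np.int64) % CODEBOOK_SIZE

def rollout(tokens, actions):
    """Toy autoregressive rollout: each action conditions the prediction of
    the next frame's token grid (placeholder for the VideoGPT-style model)."""
    frames = [tokens]
    for a in actions:
        shift = int(np.round(a.sum() * 10)) % CODEBOOK_SIZE
        frames.append((frames[-1] + shift) % CODEBOOK_SIZE)  # stub dynamics
    return frames

# Example rollout over three synthetic steps with 7-DoF arm actions.
rgb = rng.random((64, 64, 3))
depth = rng.random((64, 64))
mask = (rng.random((64, 64)) > 0.5).astype(float)
actions = [rng.random(7) for _ in range(3)]

tok = mm_tokenize(rgb, depth, mask)
traj = rollout(tok, actions)
print(tok.shape, len(traj))  # → (8, 8) 4
```

The point of the sketch is the data flow, not the model: one token grid per multimodal frame keeps the sequence short, which is what makes reusing a pretrained autoregressive video transformer tractable.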