🤖 AI Summary
Existing world models struggle to achieve both geometrically consistent multi-view 4D dynamics prediction and executable action generation, and inverse dynamics inference is often ill-posed. This work proposes an embodied 4D world model that, given only a single-view RGB-D input, generates future RGB-D sequences that are geometrically consistent across arbitrary viewpoints. Cross-view and cross-modal feature fusion ensures RGB-D consistency, while trajectory-level latent optimization at test time, combined with a residual inverse dynamics model, translates future predictions into executable actions. Experiments on three datasets demonstrate that the proposed method significantly outperforms existing approaches in both 4D scene generation and downstream manipulation tasks, validating the effectiveness of its core design components.
📝 Abstract
World-model-based imagine-then-act has become a promising paradigm for robotic manipulation, yet existing approaches typically support either purely image-based forecasting or reasoning over partial 3D geometry, limiting their ability to predict complete 4D scene dynamics. This work proposes a novel embodied 4D world model that enables geometrically consistent, arbitrary-view RGB-D generation: given only a single-view RGB-D observation as input, the model imagines the remaining viewpoints, which can then be back-projected and fused to assemble a more complete 3D structure across time. To learn this multi-view, cross-modal generation efficiently, we explicitly design cross-view and cross-modal feature fusion mechanisms that jointly encourage consistency between RGB and depth and enforce geometric alignment across views. Beyond prediction, converting generated futures into actions is typically handled by inverse dynamics, which is ill-posed because multiple actions can explain the same transition. We address this with a test-time action optimization strategy that backpropagates through the generative model to infer a trajectory-level latent that best matches the predicted future, and a residual inverse dynamics model that turns this trajectory prior into accurate executable actions. Experiments on three datasets demonstrate strong performance on both 4D scene generation and downstream manipulation, and ablations provide practical insights into the key design choices.
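The test-time action optimization described in the abstract can be sketched in miniature. The snippet below is a toy illustration of the assumed mechanics only, not the authors' implementation: a fixed linear map stands in for the frozen generative world model's decoder, and gradient descent updates a trajectory-level latent `z` until the decoded output matches the predicted future. All names (`decode`, `W`, dimensions, learning rate) are hypothetical; in the actual method, gradients would flow through the full generative model and the recovered latent would then be refined by the residual inverse dynamics model.

```python
import numpy as np

rng = np.random.default_rng(0)
D_LATENT, D_OBS = 8, 32                          # toy latent / observation sizes

W = rng.normal(size=(D_OBS, D_LATENT))           # frozen stand-in "decoder"
z_true = rng.normal(size=D_LATENT)               # latent that explains the future
predicted_future = W @ z_true                    # stand-in for the predicted future

def decode(z):
    # Toy generative model: maps a trajectory-level latent to an observation.
    return W @ z

def loss_and_grad(z):
    # Reconstruction loss between decoded latent and predicted future,
    # plus its gradient (analytic "backprop" through the linear decoder).
    residual = decode(z) - predicted_future
    loss = 0.5 * np.sum(residual ** 2)
    grad = W.T @ residual
    return loss, grad

z = np.zeros(D_LATENT)                           # initialize trajectory latent
lr = 0.01
for _ in range(500):
    loss, grad = loss_and_grad(z)
    z -= lr * grad                               # gradient step on the latent only

# The optimized latent acts as a trajectory prior; a residual inverse dynamics
# model would subsequently map it to executable actions.
print(f"final reconstruction loss: {loss:.2e}")
```

Because only the latent is updated while the generator stays frozen, this resolves part of the ambiguity of inverse dynamics: among the many actions that could explain a transition, optimization selects a latent whose decoded future actually matches the model's prediction.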