AI Summary
Existing embodied world models struggle to map low-level joint actions to physically consistent visual motion predictions, exhibiting spatial distortion and dynamical inconsistency in complex bimanual interaction scenarios. This paper proposes a multi-view trajectory-video control framework: calibrated camera intrinsic and extrinsic parameters enable precise generation of multi-view trajectory videos from Cartesian-space trajectories, and a multimodal large model is combined with a fingertip-aware video segmentation model to build an automated, quantifiable evaluation pipeline for physical interaction consistency. Compared to single-view approaches, the multi-view framework significantly mitigates spatial information loss, improving motion prediction accuracy and the fidelity of contact dynamics modeling. Experiments demonstrate high-precision motion control and strong physical consistency across multi-stage bimanual manipulation tasks, establishing a new paradigm for vision-motor co-modeling in embodied agents.
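For intuition, generating the control signal reduces to standard pinhole projection: each Cartesian waypoint is mapped into every calibrated view and then rasterized into a per-view trajectory video. The sketch below illustrates only the projection step; the function name, calibration values, and sample trajectory are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def project_points(points_3d: np.ndarray, K: np.ndarray,
                   R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Pinhole projection of (N, 3) world-frame points to (N, 2) pixel coordinates."""
    cam = points_3d @ R.T + t          # world frame -> camera frame (extrinsics)
    uvw = cam @ K.T                    # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective division by depth

# Hypothetical calibration for one view (placeholder values).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])

# A short Cartesian end-effector trajectory in front of the camera.
traj = np.array([[0.00, 0.00, 0.50],
                 [0.05, 0.00, 0.55],
                 [0.10, 0.02, 0.60]])
pixels = project_points(traj, K, R, t)  # one 2-D track; repeated per camera view
```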
Abstract
Embodied world models aim to predict and interact with the physical world through visual observations and actions. However, existing models struggle to accurately translate low-level actions (e.g., joint positions) into precise robotic movements in predicted frames, leading to inconsistencies with real-world physical interactions. To address these limitations, we propose MTV-World, an embodied world model that introduces Multi-view Trajectory-Video control for precise visuomotor prediction. Specifically, instead of using low-level actions directly as control signals, we employ trajectory videos obtained by transforming Cartesian-space trajectories through the camera intrinsic and extrinsic parameters. However, projecting raw 3D actions onto 2D images inevitably loses spatial information, making a single view insufficient for accurate interaction modeling. To overcome this, we introduce a multi-view framework that compensates for the lost spatial information and ensures high consistency with the physical world. MTV-World forecasts future frames by taking multi-view trajectory videos as input and conditioning on an initial frame for each view. Furthermore, to systematically evaluate both robotic motion precision and object interaction accuracy, we develop an automatic evaluation pipeline that leverages multimodal large models and referring video object segmentation models. We formulate the measurement of spatial consistency as an object location matching problem and adopt the Jaccard Index as the evaluation metric. Extensive experiments demonstrate that MTV-World achieves precise control execution and accurate physical interaction modeling in complex dual-arm scenarios.
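The Jaccard Index used for spatial consistency is the standard intersection-over-union between the object mask segmented from a predicted frame and the corresponding ground-truth mask. A minimal sketch follows; the empty-mask convention is our assumption, as the abstract does not specify it.

```python
import numpy as np

def jaccard_index(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Jaccard Index (intersection over union) between two binary object masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # assumption: two empty masks count as a perfect match
    inter = np.logical_and(pred, gt).sum()
    return float(inter / union)
```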