🤖 AI Summary
This work addresses the challenge of translating pixel-level video plans generated by world models into physically executable actions for robotic control. To bridge this gap, the authors propose the Tool-Centric Inverse Dynamics Model (TC-IDM), which introduces a tool-centered intermediate representation. TC-IDM extracts tool point-cloud trajectories from generated videos and, within a "plan-and-translate" architecture, maps them to 6-DoF end-effector motions and control signals. By employing decoupled action heads, TC-IDM supports multiple end-effector types and markedly improves viewpoint invariance and zero-shot generalization, excelling in particular at long-horizon and deformable-object manipulation tasks. Real-world robot experiments show an average success rate of 61.11% (77.7% on simple tasks and 38.46% on zero-shot deformable tasks), substantially outperforming existing end-to-end vision-language-action (VLA) and inverse dynamics baselines.
📝 Abstract
The vision-language-action (VLA) paradigm has enabled powerful robotic control by leveraging vision-language models, but its reliance on large-scale, high-quality robot data limits its generalization. Generative world models offer a promising alternative for general-purpose embodied AI, yet a critical gap remains between their pixel-level plans and physically executable actions. To this end, we propose the Tool-Centric Inverse Dynamics Model (TC-IDM). By focusing on the tool's imagined trajectory as synthesized by the world model, TC-IDM establishes a robust intermediate representation that bridges the gap between visual planning and physical control. TC-IDM extracts the tool's point cloud trajectories via segmentation and 3D motion estimation from generated videos. Considering diverse tool attributes, our architecture employs decoupled action heads to project these planned trajectories into 6-DoF end-effector motions and corresponding control signals. This plan-and-translate paradigm not only supports a wide range of end-effectors but also significantly improves viewpoint invariance. Furthermore, it exhibits strong generalization capabilities across long-horizon and out-of-distribution tasks, including interacting with deformable objects. In real-world evaluations, the world model with TC-IDM achieves an average success rate of 61.11 percent, with 77.7 percent on simple tasks and 38.46 percent on zero-shot deformable object tasks. It substantially outperforms end-to-end VLA-style baselines and other inverse dynamics models.
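The plan-and-translate idea can be sketched in code: estimate per-step rigid motion of the tool's point cloud between consecutive generated frames, then feed that into decoupled heads for pose and control. The sketch below is illustrative only; the function names (`pose_head`, `control_head`), the Kabsch least-squares motion estimator, and the spread-based gripper heuristic are assumptions, not the paper's actual learned components.

```python
import numpy as np

def estimate_rigid_motion(src, dst):
    """Kabsch least-squares rigid transform: finds (R, t) with dst ~= src @ R.T + t.
    src, dst: (N, 3) corresponding tool points in consecutive frames."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def pose_head(tool_clouds):
    """Illustrative 'pose head': per-step 6-DoF deltas (R, t) from the
    tool point-cloud trajectory extracted from the generated video."""
    return [estimate_rigid_motion(a, b)
            for a, b in zip(tool_clouds, tool_clouds[1:])]

def control_head(tool_clouds, close_threshold=0.05):
    """Illustrative 'control head': a binary open/close signal per frame,
    here a crude heuristic on the spread of the tool points (assumption)."""
    spread = [np.linalg.norm(c - c.mean(axis=0), axis=1).mean()
              for c in tool_clouds]
    return [s < close_threshold for s in spread]
```

Decoupling the heads means the same planned tool trajectory can drive different end-effectors: only the control head needs to change for, say, a suction cup versus a parallel gripper, while the 6-DoF pose head is shared.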