🤖 AI Summary
Transferring skills from human video demonstrations to robot policies in few-shot settings remains challenging: morphologically distinct agents lack a shared action representation, and human videos carry no robot action labels.
Method: This paper proposes motion tracks, short-horizon 2D motion trajectories on the image plane, as a unified, morphology-agnostic action representation that gives humans and robots a shared action space. The resulting imitation learning policy, Motion Track Policy (MT-π), takes image observations and outputs motion tracks capturing the predicted direction of motion for either human hands or robot end-effectors. At test time, tracks predicted from two camera views are combined via multi-view synthesis to recover 6DoF trajectories, and the policy trains on just minutes of human video plus a small number of robot demonstrations.
Results: Evaluated on four real-world household tasks, MT-π achieves an average success rate of 86.5%, outperforming state-of-the-art imitation learning baselines that do not leverage human data or the shared action space by 40%. It also generalizes to scenarios seen only in the human demonstration videos.
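The dual-view recovery step can be sketched with classical two-view geometry: given calibrated projection matrices for the two cameras, each pair of corresponding 2D track points is lifted to a 3D point by linear (DLT) triangulation. This is a minimal illustration under those assumptions, not the paper's implementation; the function names and camera setup below are hypothetical.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices (intrinsics @ [R | t]).
    uv1, uv2: (u, v) pixel coordinates of the same point in each view.
    Returns the 3D point in world coordinates.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the null space of A: the last right-singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def lift_track(P1, P2, track1, track2):
    """Lift paired 2D motion tracks (N x 2 arrays, one per view) to an N x 3 trajectory."""
    return np.array([triangulate_point(P1, P2, a, b)
                     for a, b in zip(track1, track2)])
```

For example, with a pinhole camera at the origin and a second camera offset along the x-axis, projecting a known 3D point into both views and triangulating recovers the point up to numerical precision. In practice the predicted tracks are noisy, so a least-squares solve over both views (as the SVD here provides) is the standard choice.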
📝 Abstract
Teaching robots to autonomously complete everyday tasks remains a challenge. Imitation Learning (IL) is a powerful approach that imbues robots with skills via demonstrations, but is limited by the labor-intensive process of collecting teleoperated robot data. Human videos offer a scalable alternative, but it remains difficult to directly train IL policies from them due to the lack of robot action labels. To address this, we propose to represent actions as short-horizon 2D trajectories on an image. These actions, or motion tracks, capture the predicted direction of motion for either human hands or robot end-effectors. We instantiate an IL policy called Motion Track Policy (MT-pi) which receives image observations and outputs motion tracks as actions. By leveraging this unified, cross-embodiment action space, MT-pi completes tasks with high success given just minutes of human video and limited additional robot demonstrations. At test time, we predict motion tracks from two camera views, recovering 6DoF trajectories via multi-view synthesis. MT-pi achieves an average success rate of 86.5% across 4 real-world tasks, outperforming state-of-the-art IL baselines which do not leverage human data or our action space by 40%, and generalizes to scenarios seen only in human videos. Code and videos are available on our website https://portal-cornell.github.io/motion_track_policy/.