🤖 AI Summary
The embodiment gap, the difference in physical morphology between humans and robots, induces distribution shift in imitation learning: humans actively move their heads for task-driven visual search and hand-eye coordination, whereas static robot perception systems cannot replicate this dynamic, active sensing behavior. Method: We propose EgoMI, the first framework to jointly capture first-person human head motion and bimanual manipulation trajectories. A memory-augmented policy network models historical observations and explicitly learns a coordinated perception-action mapping guided by active vision, and action retargeting adapts the learned policies to robotic bimanual arms and pan-tilt cameras. Results: Experiments on a bimanual robot platform demonstrate that explicitly modeling head motion significantly improves task success rates and robustness. EgoMI establishes a transferable paradigm for cross-embodiment imitation learning, unifying active perception with closed-loop control.
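As a rough illustration of the retargeting step, mapping a recorded human head orientation onto a robot's pan-tilt camera can be reduced to extracting yaw and pitch from the head's viewing direction and clamping them to actuator limits. The sketch below is an assumption-laden example, not the paper's actual interface: the frame conventions, joint limits, and function name are all hypothetical.

```python
# Hypothetical head-to-pan-tilt retargeting sketch. Frame conventions,
# joint limits, and the function name are assumptions, not EgoMI's API.
import numpy as np

PAN_LIMITS = (-1.57, 1.57)   # assumed pan joint range [rad]
TILT_LIMITS = (-0.79, 0.52)  # assumed tilt joint range [rad]

def head_rotation_to_pan_tilt(R_head: np.ndarray) -> tuple[float, float]:
    """Map a 3x3 head rotation matrix to pan/tilt camera joint angles.

    Assumes the camera's optical axis is the head frame's +z axis,
    expressed in the robot base frame (x forward, y left, z up).
    """
    forward = R_head[:, 2]                    # optical axis in base frame
    pan = np.arctan2(forward[1], forward[0])  # yaw about the base z axis
    tilt = np.arctan2(forward[2], np.hypot(forward[0], forward[1]))  # pitch
    # Clamp to the actuator's reachable range so the command is feasible.
    pan = float(np.clip(pan, *PAN_LIMITS))
    tilt = float(np.clip(tilt, *TILT_LIMITS))
    return pan, tilt
```

End-effector trajectories would be retargeted analogously, by expressing recorded hand poses in the robot base frame and clamping to the arms' workspace; the details depend on the specific embodiment.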
📝 Abstract
Imitation learning from human demonstrations offers a promising approach to robot skill acquisition, but egocentric human data introduces fundamental challenges due to the embodiment gap. During manipulation, humans actively coordinate head and hand movements, continuously repositioning their viewpoint and using pre-action visual fixations to locate task-relevant objects. These behaviors create dynamic, task-driven head motions that static robot sensing systems cannot replicate, leading to a significant distribution shift that degrades policy performance. We present EgoMI (Egocentric Manipulation Interface), a framework that captures synchronized end-effector and active head trajectories during manipulation tasks, yielding data that can be retargeted to compatible semi-humanoid robot embodiments. To handle rapid, wide-ranging head viewpoint changes, we introduce a memory-augmented policy that selectively incorporates historical observations. We evaluate our approach on a bimanual robot equipped with an actuated camera head and find that policies with explicit head-motion modeling consistently outperform baseline methods. These results suggest that coordinated hand-eye learning with EgoMI effectively bridges the human-robot embodiment gap, enabling robust imitation learning on semi-humanoid embodiments. Project page: https://egocentric-manipulation-interface.github.io
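To make the "memory-augmented policy" idea concrete, one common pattern is to keep a rolling buffer of past observation embeddings and let the current observation attend over it, so views lost to rapid head motion can still inform the action. The following is a minimal sketch under that assumption; the module sizes, the attention-based selection rule, and all names are illustrative, not EgoMI's actual architecture.

```python
# Minimal sketch of a memory-augmented policy that selectively attends to
# past observations. Sizes, names, and the selection mechanism are
# illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class MemoryAugmentedPolicy(nn.Module):
    def __init__(self, obs_dim: int = 512, act_dim: int = 16,
                 mem_size: int = 32):
        super().__init__()
        self.mem_size = mem_size
        self.memory: list[torch.Tensor] = []  # past observation embeddings
        self.attn = nn.MultiheadAttention(obs_dim, num_heads=8,
                                          batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * obs_dim, 256), nn.ReLU(),
                                  nn.Linear(256, act_dim))

    def reset(self) -> None:
        """Clear the memory at episode boundaries."""
        self.memory.clear()

    def forward(self, obs_emb: torch.Tensor) -> torch.Tensor:
        """obs_emb: (B, obs_dim) embedding of the current camera view."""
        q = obs_emb.unsqueeze(1)                   # (B, 1, D) query
        if self.memory:
            mem = torch.stack(self.memory, dim=1)  # (B, T, D) keys/values
            ctx, _ = self.attn(q, mem, mem)        # attend over history
        else:
            ctx = torch.zeros_like(q)              # no history yet
        self.memory.append(obs_emb.detach())       # store current view
        if len(self.memory) > self.mem_size:       # bound the buffer
            self.memory.pop(0)
        # Fuse the current observation with retrieved context, then act.
        return self.head(torch.cat([obs_emb, ctx.squeeze(1)], dim=-1))
```

In use, one would call `reset()` at the start of each episode and invoke the policy once per control step (e.g., `action = policy(obs_emb)`). The abstract does not specify how history is selected; attention is just one plausible mechanism alongside keyframe subsampling or learned gating.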