🤖 AI Summary
To address the limited generalization of robotic manipulation policies caused by scarce robot hardware data, this paper proposes a vision-language-action (VLA) model training paradigm grounded in first-person human demonstration videos. Methodologically, a VLA model is pretrained on large-scale egocentric video corpora; human manipulations are then mapped to robot-executable actions via inverse kinematics and human-to-robot action retargeting, followed by fine-tuning on a small set of real-robot demonstrations. The key contribution is an end-to-end, cross-modal transfer from monocular first-person video to embodied robotic policies. Experiments on the Isaac Humanoid Manipulation Benchmark demonstrate substantial improvements over existing baselines, validating the efficacy of human video data in enhancing policy diversity, scene adaptability, and scalability.
📄 Abstract
Real robot data collection for imitation learning has led to significant advancements in robotic manipulation. However, the requirement for robot hardware in the process fundamentally constrains the scale of the data. In this paper, we explore training Vision-Language-Action (VLA) models using egocentric human videos. The benefit of using human videos is not only their scale but, more importantly, the richness of scenes and tasks they cover. With a VLA trained on human video to predict human wrist and hand actions, we can perform Inverse Kinematics and retargeting to convert the human actions into robot actions. We fine-tune the model using a few robot manipulation demonstrations to obtain the robot policy, named EgoVLA. We propose a simulation benchmark, the Isaac Humanoid Manipulation Benchmark, with diverse bimanual manipulation tasks and accompanying demonstrations. We fine-tune and evaluate EgoVLA on the Isaac Humanoid Manipulation Benchmark, show significant improvements over baselines, and ablate the importance of human data. Videos can be found on our website: https://rchalyang.github.io/EgoVLA
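To make the pipeline concrete, the conversion step (predicted human wrist pose → retargeted robot target → joint angles via Inverse Kinematics) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the workspace scaling factor, link lengths, and the planar 2-link arm are all hypothetical stand-ins for the actual humanoid retargeting and IK solver.

```python
import math

def retarget_wrist(human_wrist_xy, scale=0.8):
    # Hypothetical retargeting: scale the human wrist position
    # into the (smaller) robot workspace.
    return (human_wrist_xy[0] * scale, human_wrist_xy[1] * scale)

def ik_2link(x, y, l1=0.3, l2=0.25):
    # Analytic IK for a planar 2-link arm (elbow-down solution).
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    c2 = max(-1.0, min(1.0, c2))  # clamp for numerical safety
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2

def fk_2link(q1, q2, l1=0.3, l2=0.25):
    # Forward kinematics, used here only to verify the IK solution.
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

# Predicted human wrist position (from the VLA) -> robot joint targets.
target = retarget_wrist((0.4, 0.3))
q1, q2 = ik_2link(*target)
```

In the full system, the same idea applies per timestep and per arm, with hand retargeting handled analogously for the fingers.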