AI Summary
This work addresses the challenge of efficiently training robot visuomotor policies from monocular RGB videos alone, a setting where existing imitation learning methods often fall short due to their reliance on multi-view cameras, depth sensors, or custom hardware. The authors propose a novel approach that, for the first time, generates high-quality wrist-view observations solely from ordinary first-person monocular RGB footage. Their method leverages a vision foundation model to initialize the interaction scene, integrates hand-object tracking with trajectory retargeting, and employs Gaussian Splatting to synthesize wrist-centric views for policy training. Evaluated on five tabletop manipulation tasks, the approach achieves success rates comparable to those obtained with teleoperated demonstration data while requiring 5-8x less data collection time and substantially diminishing dependence on specialized hardware.
Abstract
Recent advancements in learning from human demonstration have shown promising results in addressing the scalability and high cost of data collection required to train robust visuomotor policies. However, existing approaches are often constrained by a reliance on multi-view camera setups, depth sensors, or custom hardware and are typically limited to policy execution from third-person or egocentric cameras. In this paper, we present WARPED, a framework designed to synthesize realistic wrist-view observations from human demonstration videos to facilitate the training of visuomotor policies using only monocular RGB data. With data collected from an egocentric RGB camera, our system leverages vision foundation models to initialize the interaction scene. A hand-object interaction pipeline is then employed to track the hand and manipulated object and retarget the trajectories to a robotic end-effector. Lastly, photo-realistic wrist-view observations are synthesized via Gaussian Splatting to directly train a robotic policy. We demonstrate that WARPED achieves success rates comparable to policies trained on teleoperated demonstration data for five tabletop manipulation tasks, while requiring 5-8x less data collection time.
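The three-stage pipeline described in the abstract (scene initialization, hand-object tracking with retargeting, and wrist-view synthesis) can be sketched as follows. This is a minimal illustrative skeleton, not the authors' implementation: every function and class name here is a hypothetical placeholder, and each stage returns toy data standing in for the real foundation-model, tracking, and Gaussian Splatting components.

```python
# Hypothetical sketch of the WARPED-style pipeline. All names are
# illustrative placeholders, not the authors' actual API.

from dataclasses import dataclass


@dataclass
class Frame:
    """One monocular RGB frame from the egocentric camera."""
    rgb: list  # H x W x 3 pixel data (placeholder type)


def initialize_scene(frames):
    """Stage 1: a vision foundation model segments and reconstructs
    the interaction scene from the input frames (stubbed here)."""
    return {"objects": ["cup"], "background": "tabletop"}


def track_hand_object(frames):
    """Stage 2a: hand-object interaction tracking, yielding a
    per-frame hand pose (toy 2D poses stand in for full 6-DoF)."""
    return [(i * 0.01, i * 0.01) for i, _ in enumerate(frames)]


def retarget_to_gripper(hand_trajectory):
    """Stage 2b: map human hand poses to robot end-effector poses
    (identity mapping here; the real retargeting is kinematic)."""
    return [(x, y) for (x, y) in hand_trajectory]


def render_wrist_views(scene, ee_trajectory):
    """Stage 3: synthesize a photo-realistic wrist-view image per
    end-effector pose via Gaussian Splatting (stubbed as strings)."""
    return [f"wrist_view_{i}" for i, _ in enumerate(ee_trajectory)]


def build_training_set(frames):
    """Run all stages and pair each synthesized observation with its
    action, producing (observation, action) tuples for policy training."""
    scene = initialize_scene(frames)
    hand_traj = track_hand_object(frames)
    ee_traj = retarget_to_gripper(hand_traj)
    observations = render_wrist_views(scene, ee_traj)
    return list(zip(observations, ee_traj))
```

The key design point the abstract emphasizes is that every input to `build_training_set` comes from a single monocular RGB stream; the wrist-view observations consumed by the policy exist only as renderings, never as real camera captures.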