WARPED: Wrist-Aligned Rendering for Robot Policy Learning from Egocentric Human Demonstrations

πŸ“… 2026-04-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the challenge of efficiently training robot visuomotor policies using only monocular RGB videos, a setting where existing imitation learning methods often fall short due to their reliance on multi-view cameras, depth sensors, or custom hardware. The authors propose a novel approach that, for the first time, generates high-quality egocentric wrist-view observations solely from ordinary first-person monocular RGB footage. Their method leverages a vision foundation model to initialize the interaction scene, integrates hand–object tracking with trajectory retargeting, and employs Gaussian splatting to synthesize wrist-centric views for policy training. Evaluated on five tabletop manipulation tasks, the approach achieves success rates comparable to those obtained with teleoperated demonstration data while reducing data collection time by 5–8Γ— and substantially diminishing dependence on specialized hardware.

πŸ“ Abstract
Recent advancements in learning from human demonstration have shown promising results in addressing the scalability and high cost of data collection required to train robust visuomotor policies. However, existing approaches are often constrained by a reliance on multi-view camera setups, depth sensors, or custom hardware, and are typically limited to policy execution from third-person or egocentric cameras. In this paper, we present WARPED, a framework designed to synthesize realistic wrist-view observations from human demonstration videos to facilitate the training of visuomotor policies using only monocular RGB data. With data collected from an egocentric RGB camera, our system leverages vision foundation models to initialize the interactive scene. A hand–object interaction pipeline is then employed to track the hand and manipulated object and to retarget the trajectories to a robotic end-effector. Lastly, photo-realistic wrist-view observations are synthesized via Gaussian Splatting to directly train a robotic policy. We demonstrate that WARPED achieves success rates comparable to policies trained on teleoperated demonstration data across five tabletop manipulation tasks, while requiring 5–8Γ— less data collection time.
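The abstract's pipeline (track hand and object per frame, retarget the hand trajectory to a robot end-effector, then render a wrist-view observation for each retargeted pose) can be sketched as follows. This is a minimal illustrative sketch only: all class and function names are hypothetical assumptions, not the authors' actual API, and the tracking and Gaussian-splatting stages are stubbed out.

```python
from dataclasses import dataclass
from typing import List, Tuple

Pose6D = Tuple[float, float, float, float, float, float]  # x, y, z, roll, pitch, yaw


@dataclass
class Frame:
    """One synthesized RGB observation (pixel buffer stubbed for brevity)."""
    pixels: List[float]


@dataclass
class HandObjectTrack:
    """Per-frame result of the hand-object interaction tracker (hypothetical)."""
    hand_pose: Pose6D
    object_pose: Pose6D


def retarget_to_end_effector(track: HandObjectTrack) -> Pose6D:
    # Map the tracked human hand pose to a robot end-effector pose.
    # A real retargeter would account for gripper geometry and kinematic
    # limits; here we pass the hand pose through unchanged as the action.
    return track.hand_pose


def synthesize_wrist_view(scene: object, ee_pose: Pose6D) -> Frame:
    # Stand-in for Gaussian-splatting rendering from a camera rigidly
    # attached to the end-effector pose; returns a dummy frame here.
    return Frame(pixels=[0.0])


def build_training_pairs(
    scene: object, tracks: List[HandObjectTrack]
) -> List[Tuple[Frame, Pose6D]]:
    """Turn tracked demonstrations into (observation, action) pairs
    suitable for behavior-cloning a wrist-view visuomotor policy."""
    pairs = []
    for track in tracks:
        action = retarget_to_end_effector(track)
        observation = synthesize_wrist_view(scene, action)
        pairs.append((observation, action))
    return pairs
```

The design point the sketch highlights is that the policy never sees the human video directly: it trains only on rendered wrist-view frames paired with retargeted end-effector actions, so at deployment the robot's real wrist camera matches the training distribution.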
Problem

Research questions and friction points this paper is trying to address.

- visuomotor policy
- egocentric demonstration
- wrist-view synthesis
- robot learning
- monocular RGB
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Wrist-Aligned Rendering
- Egocentric Demonstration
- Visuomotor Policy Learning
- Gaussian Splatting
- Hand-Object Interaction