Visual Imitation Enables Contextual Humanoid Control

📅 2025-05-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of enabling humanoid robots to generalize environment-dependent skills—such as stair climbing and sit-to-stand transitions—to real-world settings. We propose the first real-to-sim-to-real end-to-end embodied learning framework. Given only a single human demonstration video, our method jointly reconstructs the scene and human motion, models neural motion priors, learns a whole-body control policy conditioned on the environment and global root commands, and transfers that policy across domains—yielding a unified, deployable policy for real robots. It requires no manual annotations, simulation pretraining, or task-specific engineering. Experiments demonstrate robust execution of complex dynamic skills on unseen stairs, chairs, and benches—achieving, for the first time, single-policy, multi-scene, environment-conditioned end-to-end embodied control.
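The staged pipeline described above can be sketched as a chain of function calls. This is a minimal illustrative sketch, not the paper's code: every function name, data field, and the stand-in return values are hypothetical, chosen only to make the stage ordering concrete.

```python
# Hypothetical sketch of a real-to-sim-to-real pipeline in the style described
# above. All names (reconstruct, learn_motion_prior, train_policy) and the
# stub return values are illustrative assumptions, not the paper's API.
from dataclasses import dataclass


@dataclass
class Reconstruction:
    """Joint output of the reconstruction stage: environment + human motion."""
    scene_mesh: str        # stand-in for reconstructed environment geometry
    motion_frames: list    # stand-in for the recovered human motion trajectory


def reconstruct(video: str) -> Reconstruction:
    # Stage 1: jointly reconstruct the scene and human motion from one video.
    return Reconstruction(
        scene_mesh=f"mesh({video})",
        motion_frames=[f"pose_{t}" for t in range(3)],
    )


def learn_motion_prior(recon: Reconstruction) -> dict:
    # Stage 2: distill the recovered motion into a reusable motion prior.
    return {"prior": recon.motion_frames}


def train_policy(recon: Reconstruction, prior: dict) -> dict:
    # Stage 3: train a whole-body policy in simulation, conditioned on the
    # environment (e.g. a height map of the scene) and global root commands.
    return {
        "obs": ["heightmap", "root_command", "proprioception"],
        "prior": prior["prior"],
    }


def pipeline(video: str) -> dict:
    recon = reconstruct(video)
    prior = learn_motion_prior(recon)
    # Stage 4: the trained policy is deployed zero-shot on the real robot.
    return train_policy(recon, prior)


policy = pipeline("stair_climb_demo.mp4")
```

The point of the composition is that a single demonstration video drives every downstream stage; no stage consumes manual annotations or task-specific labels.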

📝 Abstract
How can we teach humanoids to climb staircases and sit on chairs using the surrounding environment context? Arguably, the simplest way is to just show them: casually capture a human motion video and feed it to humanoids. We introduce VIDEOMIMIC, a real-to-sim-to-real pipeline that mines everyday videos, jointly reconstructs the humans and the environment, and produces whole-body control policies for humanoid robots that perform the corresponding skills. We demonstrate the results of our pipeline on real humanoid robots, showing robust, repeatable contextual control such as staircase ascents and descents, sitting and standing from chairs and benches, as well as other dynamic whole-body skills, all from a single policy, conditioned on the environment and global root commands. VIDEOMIMIC offers a scalable path towards teaching humanoids to operate in diverse real-world environments.
Problem

Research questions and friction points this paper is trying to address.

Teaching humanoids contextual control from videos
Reconstructing humans and environment for robot policies
Enabling diverse real-world skills via a single policy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mines everyday videos for humanoid control
Reconstructs humans and environment jointly
Produces whole-body control policies from video