From Generated Human Videos to Physically Plausible Robot Trajectories

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work tackles zero-shot imitation learning for humanoid robots, enabling them to imitate human motions from the noisy, morphologically distorted videos produced by video diffusion models. We propose a two-stage framework: first, lift the input video into a 4D human representation and retarget the motion to the robot's morphology; second, deploy GenMimic, a physics-aware reinforcement learning policy that integrates symmetry regularization, keypoint-weighted tracking rewards, and dynamics constraints to achieve robust 3D pose imitation. The method requires no action labels, no model fine-tuning, and no real human motion data. In simulation it significantly outperforms strong baselines; on the Unitree G1 robot it delivers plug-and-play, coherent, and stable motion tracking. In addition, we introduce GenMimicBench, the first benchmark explicitly designed for evaluating zero-shot generalization in robot imitation learning.
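To make the first stage concrete, below is a minimal Python sketch of a baseline retargeting step: rescaling each lifted human limb segment to the robot's link length while preserving the segment's direction. The skeleton edges, link lengths, and function names are illustrative placeholders and not the paper's actual retargeting procedure.

```python
import numpy as np

# Hypothetical kinematic chain: (parent, child) joint index pairs, ordered
# root-to-leaf, with corresponding robot link lengths. Values are placeholders.
SKELETON_EDGES = [(0, 1), (1, 2), (2, 3)]   # e.g. pelvis -> knee -> ankle -> foot
ROBOT_LINK_LENGTHS = [0.40, 0.38, 0.12]     # metres, illustrative only

def retarget_keypoints(human_kpts: np.ndarray) -> np.ndarray:
    """Rescale each human limb segment to the robot's link length while
    keeping the segment's direction (a common baseline retargeting step)."""
    robot_kpts = human_kpts.copy()
    for (parent, child), length in zip(SKELETON_EDGES, ROBOT_LINK_LENGTHS):
        direction = human_kpts[child] - human_kpts[parent]
        norm = np.linalg.norm(direction) + 1e-8  # guard against zero-length segments
        # Parent is already retargeted (edges are ordered root-to-leaf).
        robot_kpts[child] = robot_kpts[parent] + direction / norm * length
    return robot_kpts

# Apply per frame to a lifted 4D sequence (T frames x J joints x 3).
sequence = np.random.rand(120, 4, 3)  # stand-in for 4D-lifted human keypoints
robot_targets = np.stack([retarget_keypoints(frame) for frame in sequence])
print(robot_targets.shape)  # (120, 4, 3)
```

The retargeted keypoint sequence would then serve as the tracking target for the second-stage policy.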

📝 Abstract
Video generation models are rapidly improving in their ability to synthesize human actions in novel contexts, holding the potential to serve as high-level planners for contextual robot control. To realize this potential, a key research question remains open: how can a humanoid execute the human actions from generated videos in a zero-shot manner? This challenge arises because generated videos are often noisy and exhibit morphological distortions that make direct imitation difficult compared to real videos. To address this, we introduce a two-stage pipeline. First, we lift video pixels into a 4D human representation and then retarget the motion to the humanoid morphology. Second, we propose GenMimic, a physics-aware reinforcement learning policy conditioned on 3D keypoints and trained with symmetry regularization and keypoint-weighted tracking rewards. As a result, GenMimic can mimic human actions from noisy, generated videos. We curate GenMimicBench, a synthetic human-motion dataset generated using two video generation models across a spectrum of actions and contexts, establishing a benchmark for assessing zero-shot generalization and policy robustness. Extensive experiments demonstrate improvements over strong baselines in simulation and confirm coherent, physically stable motion tracking on a Unitree G1 humanoid robot without fine-tuning. This work offers a promising path to realizing the potential of video generation models as high-level policies for robot control.
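The keypoint-weighted tracking reward mentioned in the abstract can be illustrated with the exponentiated weighted-error form common in motion-imitation RL. The weight vector and the scale `sigma` below are assumptions for illustration; the paper's exact reward formulation is not reproduced here.

```python
import numpy as np

def keypoint_tracking_reward(robot_kpts, target_kpts, weights, sigma=0.5):
    """Weighted keypoint-tracking reward: squared per-joint position error,
    averaged with per-joint weights, then mapped through an exponential so the
    reward lies in (0, 1]. Higher weights make task-critical joints dominate."""
    sq_err = np.sum((robot_kpts - target_kpts) ** 2, axis=-1)  # per-joint error
    weighted = np.sum(weights * sq_err) / np.sum(weights)
    return np.exp(-weighted / sigma**2)

# Illustrative call: 4 joints, higher weight on the end-effectors.
robot = np.zeros((4, 3))
target = np.full((4, 3), 0.05)
w = np.array([1.0, 1.0, 2.0, 2.0])
print(keypoint_tracking_reward(robot, target, w))
```

Weighting the error this way lets the policy prioritize keypoints that define the action (hands, feet) over ones that tolerate deviation, which is one plausible way to stay robust to noise in generated videos.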
Problem

Research questions and friction points this paper is trying to address.

Convert generated human videos to robot trajectories
Enable zero-shot imitation from noisy video data
Ensure physically plausible motion for humanoid robots
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lift video pixels to 4D human representation and retarget to humanoid morphology
Use physics-aware reinforcement learning policy conditioned on 3D keypoints
Train policy with symmetry regularization and keypoint-weighted tracking rewards (a sketch of the symmetry loss follows below)
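As a sketch of the symmetry regularization idea, the following assumes a mirror-symmetry loss of the kind used in locomotion RL: the policy's action for a mirrored observation should equal the mirror of its action for the original observation. The toy linear policy and the sign-flip `mirror` transform are placeholders for the robot's true left/right joint mapping.

```python
import torch

def symmetry_loss(policy, obs, mirror_obs_fn, mirror_act_fn):
    """Penalize asymmetric behavior: compare the mirrored action for the
    original observation against the action for the mirrored observation."""
    act = policy(obs)
    act_mirrored = policy(mirror_obs_fn(obs))
    return torch.mean((mirror_act_fn(act) - act_mirrored) ** 2)

# Minimal illustration with a linear "policy" and a sign-flip mirror.
policy = torch.nn.Linear(6, 3)
obs = torch.randn(8, 6)
mirror = lambda x: -x  # placeholder for the real left/right joint permutation
print(symmetry_loss(policy, obs, mirror, mirror).item())
```

Added as an auxiliary term to the RL objective, a loss of this form discourages lopsided gaits and arm motions even when the noisy video target is itself asymmetric.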