Retargeting Matters: General Motion Retargeting for Humanoid Motion Tracking

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Humanoid robot motion tracking suffers from the embodiment gap between human and robot morphologies, which introduces retargeting artifacts such as foot sliding, self-penetration, and physically implausible motion. Existing approaches leave these artifacts in the reference trajectories for post-hoc reinforcement learning policies to correct, at the cost of extensive reward engineering and domain randomization. This paper proposes General Motion Retargeting (GMR), a retargeting method that explicitly suppresses such artifacts at the source. Using BeyondMimic for policy training, so that retargeting quality is decoupled from downstream reward tuning, GMR achieves superior tracking accuracy and source-motion fidelity over leading open-source methods on a diverse LAFAN1 subset. It significantly improves success rates for dynamic and long-sequence motions, approaching the performance of a high-quality closed-source baseline, and it eliminates the need for intricate reward tuning, enhancing policy robustness across diverse motion styles.

📝 Abstract
Humanoid motion tracking policies are central to building teleoperation pipelines and hierarchical controllers, yet they face a fundamental challenge: the embodiment gap between humans and humanoid robots. Current approaches address this gap by retargeting human motion data to humanoid embodiments and then training reinforcement learning (RL) policies to imitate these reference trajectories. However, artifacts introduced during retargeting, such as foot sliding, self-penetration, and physically infeasible motion, are often left in the reference trajectories for the RL policy to correct. While prior work has demonstrated motion tracking abilities, it often requires extensive reward engineering and domain randomization to succeed. In this paper, we systematically evaluate how retargeting quality affects policy performance when excessive reward tuning is suppressed. To address issues that we identify with existing retargeting methods, we propose a new retargeting method, General Motion Retargeting (GMR). We evaluate GMR alongside two open-source retargeters, PHC and ProtoMotions, as well as with a high-quality closed-source dataset from Unitree. Using BeyondMimic for policy training, we isolate retargeting effects without reward tuning. Our experiments on a diverse subset of the LAFAN1 dataset reveal that while most motions can be tracked, artifacts in retargeted data significantly reduce policy robustness, particularly for dynamic or long sequences. GMR consistently outperforms existing open-source methods in both tracking performance and faithfulness to the source motion, achieving perceptual fidelity and policy success rates close to the closed-source baseline. Website: https://jaraujo98.github.io/retargeting_matters. Code: https://github.com/YanjieZe/GMR.
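The paper does not reproduce GMR's formulation here, but the general idea behind optimization-based retargeting — per frame, match key body points of the human while staying close to the source pose so the result stays faithful and kinematically consistent — can be sketched on a toy planar 2-link arm. Everything below (link lengths, the reach-ratio scaling, the regularizer weight, the finite-difference solver) is an illustrative assumption, not GMR's actual objective or algorithm:

```python
import math

HUMAN_LINKS = (0.30, 0.25)   # toy "human" arm link lengths (m)
ROBOT_LINKS = (0.25, 0.20)   # shorter "robot" arm: the embodiment gap

def wrist(q, links):
    """Forward kinematics: wrist (end-effector) of a planar 2-link chain."""
    a1, a2 = q
    l1, l2 = links
    return (l1 * math.cos(a1) + l2 * math.cos(a1 + a2),
            l1 * math.sin(a1) + l2 * math.sin(a1 + a2))

def retarget(human_q, w_pose=0.1, steps=300, lr=0.5, eps=1e-6):
    """Per-frame retargeting as a small optimization: match the human wrist
    (scaled into the robot's reach) while staying near the source pose."""
    scale = sum(ROBOT_LINKS) / sum(HUMAN_LINKS)   # crude reach-ratio heuristic
    hx, hy = wrist(human_q, HUMAN_LINKS)
    tx, ty = hx * scale, hy * scale               # scaled keypoint target

    def cost(q):
        rx, ry = wrist(q, ROBOT_LINKS)
        task = (rx - tx) ** 2 + (ry - ty) ** 2    # end-effector match
        pose = sum((a - b) ** 2 for a, b in zip(q, human_q))
        return task + w_pose * pose               # regularize toward source pose

    q = list(human_q)                             # warm-start at the human pose
    for _ in range(steps):                        # finite-difference descent
        grad = []
        for i in range(2):
            qp = q[:]
            qp[i] += eps
            grad.append((cost(qp) - cost(q)) / eps)
        q = [a - lr * g for a, g in zip(q, grad)]
    return tuple(q)

robot_q = retarget((0.6, 0.8))
```

A real retargeter would optimize over a full-body kinematic tree with many keypoints, joint limits, and contact/penetration terms; the sketch only shows the match-keypoints-plus-stay-near-source structure of the objective.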
Problem

Research questions and friction points this paper is trying to address.

Addressing the embodiment gap between humans and humanoid robots
Reducing artifacts in retargeted motion data for RL policies
Improving policy robustness without extensive reward engineering
Innovation

Methods, ideas, or system contributions that make the work stand out.

General Motion Retargeting method reduces motion artifacts
Evaluates retargeting quality impact without reward tuning
Outperforms open-source methods in tracking and fidelity
Joao Pedro Araujo
Department of Computer Science, School of Engineering, Stanford University, 353 Jane Stanford Way, Stanford, CA 94305, United States
Yanjie Ze
Stanford University
Robotics · Embodied AI · Humanoid Robots
Pei Xu
Department of Computer Science, School of Engineering, Stanford University, 353 Jane Stanford Way, Stanford, CA 94305, United States
Jiajun Wu
Department of Computer Science, School of Engineering, Stanford University, 353 Jane Stanford Way, Stanford, CA 94305, United States
C. Karen Liu
Professor of Computer Science, Stanford University
Computer Graphics · Robotics