🤖 AI Summary
This work addresses the challenge of directly transferring text-generated human motions to humanoid robots, which often fails due to kinematic and dynamic mismatches, erroneous contact transitions, and the absence of realistic physical constraints. To overcome this, the authors propose a unified latent-space framework that forgoes explicit motion retargeting by tightly coupling text-guided motion generation with whole-body control through a bidirectional coupling mechanism under physical constraints. The core innovation is the Physical Plausibility Optimization (PP-Opt) module, which integrates diffusion-based motion generation, teacher-student distillation, reward-driven physics optimization, and latent-conditioned control to establish a self-improving loop between generation and execution. In experiments in IsaacLab and MuJoCo, the method significantly improves tracking accuracy and task success rates on the Unitree G1 humanoid robot, outperforming conventional retargeting approaches and achieving notable advances in both stability and precision.
📝 Abstract
While generative models have become effective at producing human-like motions from text, transferring these motions to humanoid robots for physical execution remains challenging. Existing pipelines are often limited by retargeting, where kinematic quality is undermined by physical infeasibility, contact-transition errors, and the high cost of real-world dynamics data. We present a unified latent-driven framework that bridges natural language and whole-body humanoid locomotion through a retarget-free, physics-optimized pipeline. Rather than treating generation and control as separate stages, our key insight is to couple them bidirectionally under physical constraints. We introduce a Physical Plausibility Optimization (PP-Opt) module as the coupling interface. In the forward direction, PP-Opt refines a teacher-student distillation policy with a plausibility-centric reward to suppress artifacts such as floating, skating, and penetration. In the backward direction, it converts reward-optimized simulation rollouts into high-quality explicit motion data, which is used to fine-tune the motion generator toward a more physically plausible latent distribution. This bidirectional design forms a self-improving cycle: the generator learns a physically grounded latent space, while the controller learns to execute latent-conditioned behaviors with dynamical integrity. Extensive experiments on the Unitree G1 humanoid show that our bidirectional optimization improves tracking accuracy and success rates. Across IsaacLab and MuJoCo, the implicit latent-driven pipeline consistently outperforms conventional explicit retargeting baselines in both precision and stability. By coupling diffusion-based motion generation with physical plausibility optimization, our framework provides a practical path toward deployable text-guided humanoid intelligence.
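The bidirectional cycle described above can be illustrated schematically. The sketch below is not the authors' implementation: it replaces the diffusion generator with a toy Gaussian latent sampler, the physics simulator with a stand-in that produces artifact magnitudes, and the generator fine-tuning with a cross-entropy-style refit on the most plausible rollouts. All function and variable names (`simulate`, `plausibility_reward`, `pp_opt_cycle`) are hypothetical; only the loop structure (forward: score rollouts with a plausibility-centric reward; backward: pull the latent distribution toward plausible rollouts) mirrors the paper's description.

```python
import random

def simulate(latent):
    # Hypothetical stand-in for a physics rollout: artifact magnitudes
    # shrink as the latent approaches a "physically grounded" region
    # (modeled here as latent == 0.0).
    err = abs(latent)
    return {"float": err, "skate": 0.5 * err, "penetrate": 0.25 * err}

def plausibility_reward(rollout):
    # Plausibility-centric reward: penalize the artifacts named in the
    # paper (floating, foot skating, ground penetration).
    return -(rollout["float"] + rollout["skate"] + rollout["penetrate"])

def pp_opt_cycle(gen_mean, gen_std, iters=20, batch=32, elite_frac=0.25, seed=0):
    """Toy PP-Opt loop. Forward: sample latents, roll them out, score with
    the plausibility reward. Backward: refit the (Gaussian) generator on
    the elite, i.e. most physically plausible, rollouts."""
    rng = random.Random(seed)
    for _ in range(iters):
        latents = [rng.gauss(gen_mean, gen_std) for _ in range(batch)]
        # Forward direction: rank rollouts by physical plausibility.
        scored = sorted(latents,
                        key=lambda z: plausibility_reward(simulate(z)),
                        reverse=True)
        elite = scored[: max(1, int(elite_frac * batch))]
        # Backward direction: fine-tune the generator toward the
        # latent distribution of plausible rollouts.
        gen_mean = sum(elite) / len(elite)
        var = sum((z - gen_mean) ** 2 for z in elite) / len(elite)
        gen_std = max(0.05, var ** 0.5)  # floor keeps exploration alive
    return gen_mean, gen_std

# Starting from an implausible latent region (mean 2.0), the cycle
# drifts the generator toward the plausible region near 0.0.
mean, std = pp_opt_cycle(gen_mean=2.0, gen_std=1.0)
```

In the real system the backward step would fine-tune a diffusion model on reward-optimized motion trajectories rather than refit a Gaussian, but the self-improving structure, where execution quality reshapes the generator's latent distribution, is the same.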