🤖 AI Summary
Existing imitation learning methods struggle to replicate highly dynamic human skills—such as martial arts or dance—because they rely on assumptions of smooth, low-velocity motion, leaving them ill-equipped to handle strong impacts and rapid directional changes. To address this, we propose a physics-constrained whole-body control framework. First, we design a physics-prioritized motion retargeting pipeline that safely maps human motions onto robot configurations. Second, we introduce a two-tier optimization mechanism: the upper layer dynamically adjusts the tracking tolerance based on real-time error (an adaptive curriculum), while the lower layer uses an asymmetric actor-critic architecture for training high-dynamics policies, integrated with adaptive reward shaping and real-time whole-body dynamics control. Evaluated on the Unitree G1 humanoid platform, our method significantly reduces tracking error and robustly reproduces complex, highly dynamic motions, outperforming state-of-the-art approaches.
📝 Abstract
Humanoid robots show promise in acquiring diverse skills by imitating human behaviors. However, existing algorithms can only track smooth, low-speed human motions, even with delicate reward and curriculum design. This paper presents a physics-based humanoid control framework that aims to master highly dynamic human behaviors, such as Kungfu and dancing, through multi-step motion processing and adaptive motion tracking. For motion processing, we design a pipeline that extracts, filters, corrects, and retargets motions while ensuring compliance with physical constraints to the maximum extent. For motion imitation, we formulate a bi-level optimization problem that dynamically adjusts the tracking accuracy tolerance based on the current tracking error, creating an adaptive curriculum mechanism. We further construct an asymmetric actor-critic framework for policy training. In experiments, we train whole-body control policies to imitate a set of highly dynamic motions. Our method achieves significantly lower tracking errors than existing approaches and is successfully deployed on the Unitree G1 robot, demonstrating stable and expressive behaviors. The project page is https://kungfu-bot.github.io.
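The adaptive curriculum above can be illustrated with a minimal sketch: a tolerance that tightens as tracking improves and shapes an exponential tracking reward. The class name, update rule, and constants below are illustrative assumptions, not the paper's actual formulation.

```python
import math


class AdaptiveTolerance:
    """Hypothetical sketch of an adaptive tracking-tolerance curriculum:
    the tolerance shrinks when the policy tracks well, gradually demanding
    higher accuracy (constants and update rule are assumptions)."""

    def __init__(self, tol=0.5, tol_min=0.05, rate=0.1):
        self.tol = tol          # current accuracy tolerance
        self.tol_min = tol_min  # hard lower bound on the tolerance
        self.rate = rate        # how aggressively to tighten per update

    def update(self, tracking_error):
        # Tighten the tolerance toward the observed error when tracking
        # is already within tolerance; otherwise leave it unchanged.
        if tracking_error < self.tol:
            self.tol = max(self.tol_min,
                           self.tol - self.rate * (self.tol - tracking_error))
        return self.tol

    def reward(self, tracking_error):
        # Tolerance-shaped tracking reward in (0, 1]: smaller errors
        # relative to the current tolerance yield higher reward.
        return math.exp(-tracking_error / self.tol)
```

Under this scheme, early training uses a loose tolerance (easy reward), and the curriculum tightens automatically as the policy's tracking error drops.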