AI Summary
This work proposes a humanoid robot parkour method that integrates motion matching with a perception-driven multi-skill policy to enable autonomous execution of highly dynamic, long-horizon human-like maneuvers in complex environments. By performing motion matching in a feature space, atomic human skills are composed into smooth trajectories, while a depth-vision-based multi-skill perception policy is trained through a combination of reinforcement learning expert policies and DAgger behavioral cloning. This enables real-time selection and seamless transitions among actions such as vaulting, climbing, and traversing obstacles. The approach achieves, for the first time, closed-loop environmental adaptation using only depth sensing and velocity commands. Validated on the Unitree G1 robot, it successfully performs continuous parkour over multiple obstacles, including climbs up to 1.25 meters (96% of the robot's height), demonstrating robust locomotion, human-like expressiveness, and resilience to real-world disturbances.
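The distillation step described above (expert RL policies labeling a student trained by DAgger-style behavioral cloning) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the 1-D environment, the linear policies, and all interface names are assumptions chosen to keep the loop self-contained.

```python
import numpy as np

# Hedged sketch of DAgger-style distillation: the student drives the
# rollouts, the expert labels every visited state, and the student is
# refit on the aggregated dataset. Toy 1-D dynamics for illustration.

def expert(obs):
    # Hypothetical privileged expert: drive the state toward zero.
    return -obs

class Student:
    def __init__(self):
        self.gain = 0.0  # starts knowing nothing

    def __call__(self, obs):
        return self.gain * obs

    def fit(self, dataset):
        # Least-squares fit of action = gain * obs on aggregated data.
        X = np.array([o for o, _ in dataset])
        Y = np.array([a for _, a in dataset])
        self.gain = float(X @ Y / (X @ X))

def dagger(n_iters=3, horizon=20, seed=0):
    rng = np.random.default_rng(seed)
    student, dataset = Student(), []
    for _ in range(n_iters):
        obs = rng.uniform(-1.0, 1.0)
        for _ in range(horizon):
            dataset.append((obs, expert(obs)))  # expert labels the state
            obs = obs + 0.1 * student(obs)      # student drives the rollout
        student.fit(dataset)
    return student

student = dagger()
# On this toy problem the student recovers the expert's gain of -1.
```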
Abstract
While recent advances in humanoid locomotion have achieved stable walking on varied terrains, capturing the agility and adaptivity of highly dynamic human motions remains an open challenge. In particular, agile parkour in complex environments demands not only low-level robustness, but also human-like motion expressiveness, long-horizon skill composition, and perception-driven decision-making. In this paper, we present Perceptive Humanoid Parkour (PHP), a modular framework that enables humanoid robots to autonomously perform long-horizon, vision-based parkour across challenging obstacle courses. Our approach first leverages motion matching, formulated as nearest-neighbor search in a feature space, to compose retargeted atomic human skills into long-horizon kinematic trajectories. This framework enables the flexible composition and smooth transition of complex skill chains while preserving the elegance and fluidity of dynamic human motions. Next, we train motion-tracking reinforcement learning (RL) expert policies for these composed motions, and distill them into a single depth-based, multi-skill student policy using a combination of DAgger and RL. Crucially, the combination of perception and skill composition enables autonomous, context-aware decision-making: using only onboard depth sensing and a discrete 2D velocity command, the robot selects and executes the appropriate maneuver (stepping over, climbing onto, vaulting, or rolling off) for obstacles of varying geometries and heights. We validate our framework with extensive real-world experiments on a Unitree G1 humanoid robot, demonstrating highly dynamic parkour skills such as climbing tall obstacles up to 1.25 m (96% of the robot's height), as well as long-horizon multi-obstacle traversal with closed-loop adaptation to real-time obstacle perturbations.
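The motion-matching formulation in the abstract, nearest-neighbor search over a feature space of atomic skill frames, can be sketched as below. The feature contents (e.g. root velocity, end-effector positions) and the toy database are illustrative assumptions; only the nearest-neighbor mechanism itself comes from the text.

```python
import numpy as np

# Minimal sketch of motion matching as nearest-neighbor search in a
# feature space: given the current query feature, pick the database
# frame whose feature vector is closest in Euclidean distance.

def motion_match(query_feature, clip_features):
    """Return the index of the closest database frame to the query."""
    dists = np.linalg.norm(clip_features - query_feature, axis=1)
    return int(np.argmin(dists))

# Toy database: 5 frames with 3-D features (hypothetical values; a real
# feature might stack root velocity, foot positions, and a command).
database = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.0],
    [0.5, 0.5, 1.0],
])

best = motion_match(np.array([0.9, 0.1, 0.0]), database)  # → 1
```

At runtime this search would be repeated every few frames, with the matched frame's successor clip blended into the current pose to produce the smooth, long-horizon kinematic trajectories the paper composes from atomic skills.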