🤖 AI Summary
This work proposes a whole-body multi-contact locomotion system that integrates physics-grounded keyframe animation with reinforcement learning to enable robust navigation of humanoid robots across extreme terrains—such as low-clearance spaces under chairs, knee-high walls, and steep stairs—where conventional leg-centric gaits struggle with stability and traversability. By combining a hierarchical control architecture with a vision-based skill planner, the system achieves cross-terrain generalization, failure recovery, and robust adaptation to varying obstacle dimensions and sequences. Real-world experiments demonstrate significant improvements in both mobility and stability for humanoid robots operating in complex, constrained environments.
📝 Abstract
Most locomotion methods for humanoid robots focus on leg-based gaits, yet natural bipeds frequently rely on hands, knees, and elbows to establish additional contacts for stability and support in complex environments. This paper introduces Locomotion Beyond Feet, a comprehensive system for whole-body humanoid locomotion across extremely challenging terrains, including low-clearance spaces under chairs, knee-high walls and platforms, and steep ascending and descending stairs. Our approach addresses two key challenges: contact-rich motion planning and generalization across diverse terrains. To this end, we combine physics-grounded keyframe animation with reinforcement learning. Keyframes encode human knowledge of motor skills, are embodiment-specific, and can be readily validated in simulation or on hardware, while reinforcement learning transforms these references into robust, physically accurate motions. We further employ a hierarchical framework consisting of terrain-specific motion-tracking policies, failure recovery mechanisms, and a vision-based skill planner. Real-world experiments demonstrate that Locomotion Beyond Feet achieves robust whole-body locomotion and generalizes across obstacle sizes, obstacle instances, and terrain sequences.
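The hierarchical framework described above can be pictured as a simple dispatch loop: a vision-based planner maps a perceived terrain class to a terrain-specific motion-tracking policy, with a recovery policy that preempts the planned skill on failure. The sketch below is a minimal illustration of that control flow; all class names, skill labels, and methods are hypothetical and are not from the paper.

```python
# Minimal sketch of a hierarchical skill-dispatch loop (illustrative only;
# names and skill labels are invented, not the paper's actual interface).

class Policy:
    """Stand-in for a learned terrain-specific motion-tracking policy."""
    def __init__(self, name):
        self.name = name

    def act(self, obs):
        # A real policy would map observations to joint-level actions.
        return f"{self.name}-action"


class SkillPlanner:
    """Maps a perceived terrain label to a terrain-specific policy,
    falling back to a recovery policy when a failure is detected."""
    def __init__(self):
        self.skills = {
            "low_clearance": Policy("crawl"),
            "knee_high_wall": Policy("climb_over"),
            "stairs": Policy("stair_walk"),
        }
        self.recovery = Policy("recover")
        self.default = self.skills["stairs"]

    def select(self, terrain, fallen=False):
        # Failure recovery preempts whatever skill was planned.
        if fallen:
            return self.recovery
        return self.skills.get(terrain, self.default)


planner = SkillPlanner()
print(planner.select("low_clearance").name)        # crawl
print(planner.select("stairs", fallen=True).name)  # recover
```

In the actual system each skill would be a reinforcement-learning policy trained to track keyframe-derived references, and the terrain label would come from onboard vision rather than a string argument.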