🤖 AI Summary
Robust bipedal locomotion for humanoid robots in dynamic environments faces challenges including high modeling complexity, significant sim-to-real discrepancy, and inherent trade-offs between safety and task performance. This paper proposes a hierarchical whole-body control framework that formulates policy learning as a robust optimization problem, uniquely integrating human behavioral priors with rigid-body dynamics constraints to establish an adaptive trade-off mechanism. It explicitly models autonomous recovery capabilities under safety-critical scenarios, thereby breaking the conventional conservatism–task-success trade-off. The method unifies a hierarchical policy architecture, physics-based simulation-enhanced reinforcement learning, embedded kinematic and dynamic constraints, and cross-domain generalization training. Evaluated in both simulation and real-world deployments, the approach achieves a 42% improvement in fault recovery rate and a 31% increase in task completion rate over state-of-the-art methods, demonstrating superior performance across complex terrains, multi-configuration robots, and diverse gaits.
📝 Abstract
Humanoid robots, capable of assuming human roles in various workplaces, have become essential to the advancement of embodied intelligence. However, as robots with complex physical structures, learning a control model that can operate robustly across diverse environments remains inherently challenging, particularly under the discrepancies between training and deployment environments. In this study, we propose HWC-Loco, a robust whole-body control algorithm tailored for humanoid locomotion tasks. By reformulating policy learning as a robust optimization problem, HWC-Loco explicitly learns to recover from safety-critical scenarios. While prioritizing safety guarantees, overly conservative behavior can compromise the robot's ability to complete the given tasks. To tackle this challenge, HWC-Loco leverages a hierarchical policy for robust control. This policy can dynamically resolve the trade-off between goal-tracking and safety recovery, guided by human behavior norms and dynamic constraints. To evaluate the performance of HWC-Loco, we conduct extensive comparisons against state-of-the-art humanoid control models, demonstrating HWC-Loco's superior performance across diverse terrains, robot structures, and locomotion tasks under both simulated and real-world environments.