🤖 AI Summary
This work addresses the challenge of achieving robust autonomous locomotion for humanoid robots on complex terrains, where existing Mixture-of-Experts (MoE) models suffer from insufficient specialization due to convergent expert activation. To overcome this limitation, the authors propose CMoE, a novel framework that integrates contrastive learning into a single-stage reinforcement learning architecture with MoE. By maximizing the consistency of expert activations within the same terrain type and minimizing their similarity across different terrains, CMoE encourages terrain-specific expert specialization. The resulting end-to-end locomotion policy enables the Unitree G1 robot to successfully traverse 20 cm high steps and 80 cm wide gaps, demonstrating robust and natural gait patterns on mixed terrains and significantly outperforming current state-of-the-art methods.
📝 Abstract
For effective deployment in real-world environments, humanoid robots must autonomously navigate a diverse range of complex terrains with abrupt transitions. While the vanilla Mixture-of-Experts (MoE) framework is theoretically capable of modeling diverse terrain features, in practice the gating network exhibits nearly uniform expert activations across different terrains, weakening expert specialization and limiting the model's expressive power. To address this limitation, we introduce CMoE, a novel single-stage reinforcement learning framework that integrates contrastive learning to refine expert activation distributions. By imposing contrastive constraints, CMoE maximizes the consistency of expert activations within the same terrain while minimizing their similarity across different terrains, thereby encouraging experts to specialize in distinct terrain types. We validate our approach on the Unitree G1 humanoid robot through a series of challenging experiments. Results demonstrate that CMoE enables the robot to traverse continuous steps up to 20 cm high and gaps up to 80 cm wide, while achieving robust and natural gaits across diverse mixed terrains, surpassing the limits of existing methods. To support further research and foster community development, we release our code publicly.
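The contrastive constraint described above — pulling together expert-gate activations from the same terrain and pushing apart those from different terrains — can be illustrated with a supervised-contrastive-style loss over the gating vectors. This is only a minimal sketch of the idea, not the paper's actual objective: the exact loss formulation, temperature, and similarity measure used by CMoE may differ, and the function and variable names here (`contrastive_gate_loss`, `terrain_ids`, `tau`) are illustrative assumptions.

```python
import numpy as np

def contrastive_gate_loss(gates: np.ndarray, terrain_ids: np.ndarray, tau: float = 0.1) -> float:
    """Sketch of a contrastive loss over MoE gate activations.

    gates:       (N, E) array, one softmax gate-activation vector per sample.
    terrain_ids: (N,) integer terrain label per sample.
    tau:         temperature for the similarity logits (assumed hyperparameter).

    Samples from the same terrain act as positives (similar gate vectors
    rewarded); samples from different terrains act as negatives.
    """
    # Compare gate vectors by cosine similarity, scaled by temperature.
    g = gates / np.linalg.norm(gates, axis=1, keepdims=True)
    sim = (g @ g.T) / tau
    np.fill_diagonal(sim, -np.inf)  # exclude trivial self-pairs

    # Log-softmax over each row (numerically stabilized).
    logits = sim - sim.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Positive mask: same terrain label, excluding the sample itself.
    same = terrain_ids[:, None] == terrain_ids[None, :]
    np.fill_diagonal(same, False)

    # Average log-probability of positives for each anchor that has any.
    pos_counts = same.sum(axis=1)
    valid = pos_counts > 0
    pos_log_prob = np.where(same, log_prob, 0.0)
    loss = -pos_log_prob[valid].sum(axis=1) / pos_counts[valid]
    return float(loss.mean())
```

Under this sketch, a batch whose terrain labels align with clustered gate vectors yields a lower loss than one with mismatched labels, which is exactly the gradient signal that would sharpen terrain-specific expert specialization.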