🤖 AI Summary
This work addresses the instability in bipedal walking caused by dynamics modeling errors and sensor noise by proposing a hybrid approach that combines a model-based controller with residual reinforcement learning. The method leverages an "oracle" policy, derived with access to accurate system dynamics, as a supervisory signal to guide the learning of a residual policy, thereby avoiding intricate reward design while efficiently compensating for unmodeled effects. Integrating Divergent Component of Motion (DCM) trajectory planning, whole-body torque control, domain randomization, and a model-supervised loss function, the approach significantly improves walking robustness and generalization under diverse disturbances, offering a scalable path to sim-to-real transfer.
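As background for the DCM planning mentioned above: the Divergent Component of Motion is conventionally defined from the center-of-mass (CoM) state under the linear inverted pendulum model (the symbols below are the standard ones from the DCM literature, not taken from this summary):

```latex
\xi = x + \frac{\dot{x}}{\omega_0}, \qquad \omega_0 = \sqrt{\frac{g}{z_0}},
```

where $x$ is the CoM position, $z_0$ the nominal CoM height, and $g$ gravity. The DCM obeys the unstable first-order dynamics $\dot{\xi} = \omega_0 \left( \xi - x_{\mathrm{ZMP}} \right)$, so the planner only needs to steer $\xi$ toward footstep targets while the stable CoM dynamics follow it, which is what makes DCM attractive as the model-based base for this framework.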
📝 Abstract
We propose a control framework that integrates model-based bipedal locomotion with residual reinforcement learning (RL) to achieve robust and adaptive walking in the presence of real-world uncertainties. Our approach uses a model-based controller, comprising a Divergent Component of Motion (DCM) trajectory planner and a whole-body controller, as a reliable base policy. To address uncertainty from inaccurate dynamics modeling and sensor noise, we introduce a residual policy trained through RL with domain randomization. Crucially, we employ a model-based oracle policy, which has privileged access to ground-truth dynamics during training, to supervise the residual policy via a novel supervised loss. This supervision enables the policy to efficiently learn corrective behaviors that compensate for unmodeled effects without extensive reward shaping. Our method demonstrates improved robustness and generalization across a range of randomized conditions, offering a scalable solution for sim-to-real transfer in bipedal locomotion.
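The core idea of oracle-supervised residual learning can be sketched in a few lines. The code below is an illustrative toy, not the paper's implementation: the controller, the oracle, the linear residual policy, and all names are hypothetical stand-ins, and the "model-supervised loss" is written as a simple squared error between the corrected action and the privileged oracle action.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 2  # toy state/action dimension

def base_controller(state):
    # Stand-in for the DCM planner + whole-body controller output.
    return -0.5 * state

def oracle_policy(state, true_dynamics_bias):
    # Privileged policy: sees the true (randomized) dynamics error,
    # so it can output the action that exactly cancels it.
    return base_controller(state) - true_dynamics_bias

class ResidualPolicy:
    """Tiny linear residual, trained only from oracle supervision."""
    def __init__(self, dim):
        self.W = np.zeros((dim, dim))
        self.b = np.zeros(dim)

    def __call__(self, state):
        return self.W @ state + self.b

    def supervised_update(self, state, oracle_action, lr=0.05):
        # Model-supervised loss: L = ||(base + residual) - oracle||^2.
        # Gradient step on the residual parameters only.
        residual_target = oracle_action - base_controller(state)
        err = self(state) - residual_target
        self.W -= lr * np.outer(err, state)
        self.b -= lr * err

policy = ResidualPolicy(DIM)
# Unmodeled dynamics offset; domain randomization would resample this
# per episode, here we keep one draw for brevity.
bias = np.array([0.3, -0.2])

for _ in range(500):
    s = rng.normal(size=DIM)
    policy.supervised_update(s, oracle_policy(s, bias))

# Deployment: base action plus learned residual, no oracle access needed.
s = rng.normal(size=DIM)
total_action = base_controller(s) + policy(s)
```

After training, `total_action` matches the oracle's action even though the deployed policy never observes `bias` directly, which is the mechanism the abstract describes for compensating unmodeled effects without reward shaping.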