AI Summary
Quadrupedal gait learning suffers from low energy efficiency, reliance on predefined gaits, and complex reward engineering. Method: We propose a velocity-adaptive, simplified energy-centric reward mechanism that, for the first time, directly embeds biologically inspired energy minimization into the reinforcement learning reward function. Built on the PPO algorithm, the method is trained in IsaacGym and deployed on physical ANYmal-C and Unitree Go1 robots. Without prior gait knowledge, it autonomously selects appropriate gaits across the full speed range (e.g., four-beat walking at low speeds, diagonal trotting at high speeds) and transitions smoothly between them. Results: Experiments demonstrate stable velocity tracking in both simulation and real-world deployment, with significantly lower energy consumption than conventional multi-stage reward approaches. The method achieves superior energy efficiency and generalizability, validating its effectiveness for adaptive, efficient quadrupedal locomotion.
Abstract
In reinforcement learning for legged robot locomotion, crafting effective reward strategies is crucial. Pre-defined gait patterns and complex reward systems are widely used to stabilize policy training. Drawing on the natural locomotion of humans and animals, which adapt their gaits to minimize energy consumption, we propose a simplified, energy-centric reward strategy that fosters energy-efficient locomotion across a range of speeds in quadruped robots. By implementing an adaptive energy reward function whose weight is adjusted based on commanded velocity, our approach enables ANYmal-C and Unitree Go1 robots to autonomously select appropriate gaits, such as four-beat walking at lower speeds and trotting at higher speeds. Compared with previous methods that rely on complex reward designs and prior gait knowledge, it achieves improved energy efficiency and stable velocity tracking. The effectiveness of our policy is validated both in the IsaacGym simulation environment and on real robots, demonstrating its potential for stable and adaptive locomotion.
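The abstract describes an energy reward whose weight adapts to the commanded velocity. As a minimal illustrative sketch only: the functional forms, constants, and function names below are assumptions for exposition, not the paper's actual implementation. Mechanical power is approximated as the summed magnitude of joint torque times joint velocity, and its penalty weight shrinks as the commanded speed grows, so the policy is not over-penalized for the higher power that fast gaits inevitably require:

```python
import numpy as np

def energy_reward(joint_torques, joint_velocities, base_velocity, target_velocity,
                  w_max=0.04, v_ref=1.0):
    """Hypothetical velocity-adaptive energy reward (illustrative sketch).

    joint_torques, joint_velocities : per-joint arrays for one timestep
    base_velocity, target_velocity  : actual vs. commanded forward speed (m/s)
    w_max, v_ref                    : assumed tuning constants
    """
    # Instantaneous mechanical power, approximated as sum |tau_i * qdot_i|.
    power = np.sum(np.abs(joint_torques * joint_velocities))
    # Energy-penalty weight decays with commanded speed (assumed form).
    weight = w_max / (1.0 + np.abs(target_velocity) / v_ref)
    # Velocity-tracking term, maximal when base velocity matches the command.
    tracking = np.exp(-np.square(base_velocity - target_velocity))
    return tracking - weight * power
```

With a scheme like this, gait selection is emergent: at low commanded speeds the energy term dominates and favors low-power four-beat walking, while at high speeds the relaxed weight lets the policy spend the power a trot requires while still tracking the command.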