🤖 AI Summary
Existing robot locomotion learning methods rely on offline tuning of reward weights, making it difficult to guarantee constraint satisfaction during training. This work proposes ROGER, an embodied-interaction-based online reward-gain adaptation framework that adjusts the relative weighting between the primary reward and constraint penalty terms using real-time penalty signals: the gain ratio is reduced near constraint boundaries to prioritize safety and increased in safe regions to prioritize performance, with no manual reward-gain tuning required. On a 60-kg quadruped robot, ROGER achieved near-zero constraint violation throughout training, earned up to 50% more primary reward than equivalent state-of-the-art techniques, and learned locomotion from scratch in the real world within one hour without any falls. In MuJoCo locomotion benchmarks, it matched or up to doubled the performance of policies trained with default reward functions while reducing torque usage and orientation deviation by roughly 60% each.
📝 Abstract
Existing robot locomotion learning techniques rely heavily on the offline selection of proper reward-weighting gains and cannot guarantee constraint satisfaction (i.e., avoidance of constraint violation) during training. This work addresses both issues by proposing Reward-Oriented Gains via Embodied Regulation (ROGER), which adapts reward-weighting gains online based on the penalties received throughout the embodied interaction process. The ratio between the positive-reward (primary reward) gain and the negative-reward (penalty) gain is automatically reduced as learning approaches the constraint thresholds, avoiding violation; conversely, the ratio is increased when learning is in safe states, prioritizing performance. With a 60-kg quadruped robot, ROGER achieved near-zero constraint violation throughout multiple learning trials and up to 50% more primary reward than equivalent state-of-the-art techniques. In MuJoCo continuous locomotion benchmarks, including a single-leg hopper, ROGER exhibited comparable or up to 100% higher performance with 60% less torque usage and orientation deviation than policies trained with the default reward function. Finally, real-world locomotion learning of a physical quadruped robot was achieved from scratch within one hour without any falls. This work therefore contributes to constraint-satisfying real-world continual robot locomotion learning and simplifies reward-weighting gain tuning, potentially facilitating the development of physical robots and of robots that learn in the real world.
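The core mechanism, a reward-to-penalty gain ratio that shrinks near constraint boundaries and grows in safe states, can be sketched as follows. This is a minimal illustration only: the function names, the linear interpolation, the margin normalization, and all numeric defaults are assumptions for exposition, not ROGER's actual formulation.

```python
def adapt_gain_ratio(constraint_margin, margin_threshold=0.2,
                     ratio_min=0.1, ratio_max=10.0):
    """Map a normalized constraint margin to a primary-reward/penalty gain ratio.

    constraint_margin: distance to the nearest constraint boundary,
    normalized so 1.0 means safely inside the feasible region and
    0.0 means sitting on the boundary. (Hypothetical normalization.)
    """
    # Saturate the margin: beyond `margin_threshold` the state is
    # considered fully safe and the ratio stays at its maximum.
    m = max(0.0, min(1.0, constraint_margin / margin_threshold))
    # Linearly interpolate: small margin -> small ratio (safety dominates),
    # large margin -> large ratio (performance dominates).
    return ratio_min + (ratio_max - ratio_min) * m


def shaped_reward(primary_reward, penalty, constraint_margin):
    """Combine primary reward and penalty with the online-adapted gain ratio."""
    ratio = adapt_gain_ratio(constraint_margin)
    # Keep the penalty gain fixed at 1 and scale only the primary reward,
    # so approaching a constraint boundary automatically de-emphasizes
    # task performance relative to the safety penalty.
    return ratio * primary_reward - penalty
```

In a training loop, `constraint_margin` would be recomputed at every step from the robot's state (e.g., joint-limit or orientation margins), so the effective reward function changes online without any manual retuning between runs.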