🤖 AI Summary
Training deep reinforcement learning agents under conflicting multi-objective rewards often suffers from instability and from difficulty balancing task performance against constraint satisfaction. To address this, we propose a two-stage reward curriculum learning framework: an initial phase optimizes a simplified reward to accelerate convergence, followed by a smooth transition to the full, complex reward. We introduce a novel Actor-Critic fidelity criterion for automatic, dynamic stage switching and design a flexible replay buffer that enables cross-phase sample reuse. Our approach integrates curriculum learning, dynamic reward shaping, and adaptive experience replay. Evaluated on the DeepMind Control Suite (including tasks with explicit constraints) and on real-world mobile robot navigation, our method significantly outperforms non-curriculum baselines, achieving a more robust trade-off between task success rate and constraint violation rate.
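The two-stage switching logic described above can be sketched in a few lines. Note the hedging: the paper's exact actor-critic fidelity criterion is not specified here, so the proxy below (how often the actor's action is the best of a set of random candidate actions under the critic) is purely an illustrative assumption, as are the `r_simple`/`r_full` callables and all thresholds.

```python
import numpy as np

class TwoStageRewardCurriculum:
    """Illustrative sketch of a two-stage reward curriculum with automatic
    stage switching. The fidelity proxy (critic-argmax agreement) and the
    interfaces for `actor`, `critic`, `r_simple`, and `r_full` are
    assumptions for this sketch, not the paper's exact design."""

    def __init__(self, r_simple, r_full, threshold=0.9, patience=3):
        self.r_simple = r_simple      # stage-1 simplified reward fn
        self.r_full = r_full          # stage-2 full, complex reward fn
        self.threshold = threshold    # fidelity needed to count toward a switch
        self.patience = patience      # consecutive passes required to switch
        self.stage = 1
        self._streak = 0

    def reward(self, obs, action):
        """Reward under the currently active curriculum stage."""
        fn = self.r_simple if self.stage == 1 else self.r_full
        return fn(obs, action)

    def fidelity(self, actor, critic, states, n_candidates=16, rng=None):
        """Proxy score: fraction of states where the actor's action beats
        `n_candidates` random actions under the critic."""
        rng = rng or np.random.default_rng(0)
        hits = 0
        for s in states:
            a_pi = actor(s)
            candidates = [a_pi] + [rng.uniform(-1, 1, size=a_pi.shape)
                                   for _ in range(n_candidates)]
            best = max(candidates, key=lambda a: critic(s, a))
            hits += int(np.allclose(best, a_pi))
        return hits / len(states)

    def maybe_switch(self, actor, critic, states):
        """Advance to stage 2 once fidelity stays high for `patience` checks."""
        if self.stage != 1:
            return False
        if self.fidelity(actor, critic, states) >= self.threshold:
            self._streak += 1
        else:
            self._streak = 0
        if self._streak >= self.patience:
            self.stage = 2
            return True
        return False
```

With a toy critic whose maximizer the actor already matches, the fidelity score is 1.0 and the curriculum switches after `patience` consecutive checks.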
📝 Abstract
Reinforcement learning (RL) has emerged as a powerful tool for tackling control problems, but its practical application is often hindered by the complexity of intricate reward functions with multiple terms. The reward hypothesis posits that any objective can be encapsulated in a scalar reward function, yet balancing individual, potentially adversarial, reward terms without exploitation remains challenging. To overcome the limitations of traditional RL methods, which often require precise balancing of competing reward terms, we propose a two-stage reward curriculum that first maximizes a simple reward function and then transitions to the full, complex reward. We provide a criterion based on how well the actor fits the critic to automatically determine the transition point between the two stages. Additionally, we introduce a flexible replay buffer that enables efficient phase transfer by reusing samples from one stage in the next. We evaluate our method on the DeepMind Control Suite, modified to include an additional constraint term in the reward definitions, and further in a mobile robot scenario with even more competing reward terms. In both settings, our two-stage reward curriculum substantially outperforms a baseline trained without a curriculum. Rather than exploiting the constraint term in the reward, it learns policies that balance task completion and constraint satisfaction. Our results demonstrate the potential of two-stage reward curricula for efficient and stable RL in environments with complex rewards, paving the way for more robust and adaptable robotic systems in real-world applications.
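One way to realize the cross-phase sample reuse mentioned above is to store reward-free transitions and relabel them under the currently active reward at sampling time, so stage-1 experience remains valid after the switch to the full reward. This relabeling mechanism, the class name, and its interface are assumptions for illustration; the paper's actual buffer design may differ.

```python
from collections import deque

import numpy as np

class RelabelingReplayBuffer:
    """Illustrative sketch of a replay buffer enabling cross-phase reuse:
    transitions are stored without a reward and relabeled on demand under
    whichever reward function the current curriculum stage uses.
    An assumed design, not the paper's exact implementation."""

    def __init__(self, capacity=10_000):
        self.data = deque(maxlen=capacity)

    def add(self, obs, action, next_obs, done):
        # No reward is stored; it is recomputed from the transition on demand.
        self.data.append((obs, action, next_obs, done))

    def sample(self, batch_size, reward_fn, rng=None):
        """Draw a batch and relabel each transition with `reward_fn`,
        i.e. the reward of the *current* curriculum stage."""
        rng = rng or np.random.default_rng()
        idx = rng.choice(len(self.data), size=batch_size,
                         replace=len(self.data) < batch_size)
        batch = [self.data[int(i)] for i in idx]
        return [(o, a, reward_fn(o, a, o2), o2, d) for (o, a, o2, d) in batch]
```

Because rewards are attached only at sampling time, the same stored transitions yield stage-1 rewards before the switch and full-reward labels afterward, with no data discarded at the phase boundary.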