Decoupling Task and Behavior: A Two-Stage Reward Curriculum in Reinforcement Learning for Robotics

📅 2026-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of balancing task completion and behavioral characteristics—such as energy efficiency—in robotic control via reinforcement learning, where designing multi-objective reward functions is often complex and sensitive. The authors propose a two-stage reward curriculum: in the first stage, a simplified reward containing only the primary task objective facilitates effective exploration; in the second stage, a full reward incorporating auxiliary behavioral terms is optimized, leveraging experience collected in the initial stage to enhance training stability. By decoupling task and behavioral objectives, this approach avoids the pitfalls of joint multi-objective optimization. Experiments on the DeepMind Control Suite, ManiSkill3, and a real-world mobile robot demonstrate that the method significantly outperforms baselines that directly optimize the full reward and exhibits greater robustness to variations in reward weighting.

📝 Abstract
Deep Reinforcement Learning is a promising tool for robotic control, yet practical application is often hindered by the difficulty of designing effective reward functions. Real-world tasks typically require optimizing multiple objectives simultaneously, necessitating precise tuning of their weights to learn a policy with the desired characteristics. To address this, we propose a two-stage reward curriculum where we decouple task-specific objectives from behavioral terms. In our method, we first train the agent on a simplified task-only reward function to ensure effective exploration before introducing the full reward that includes auxiliary behavior-related terms such as energy efficiency. Further, we analyze various transition strategies and demonstrate that reusing samples between phases is critical for training stability. We validate our approach on the DeepMind Control Suite, ManiSkill3, and a mobile robot environment, modified to include auxiliary behavioral objectives. Our method proves to be simple yet effective, substantially outperforming baselines trained directly on the full reward while exhibiting higher robustness to specific reward weightings.
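The core idea can be sketched as a reward schedule that starts with the task-only objective and later switches to the full reward while the replay buffer (and thus stage-one experience) is retained. This is a minimal illustration under assumed names and a fixed step-based switch, not the authors' implementation:

```python
# Minimal sketch of a two-stage reward curriculum (hypothetical
# interface, not the paper's code). Stage 1 optimizes only the task
# reward to aid exploration; stage 2 adds weighted behavioral terms
# such as an energy penalty. Keeping the replay buffer across the
# switch is what the paper calls sample reuse between phases.

class TwoStageRewardCurriculum:
    def __init__(self, behavior_weight=0.1, switch_step=100_000):
        self.behavior_weight = behavior_weight  # weight on auxiliary terms
        self.switch_step = switch_step          # step at which stage 2 begins
        self.step = 0

    def reward(self, task_reward, behavior_penalty):
        """Return the curriculum reward for the current training step."""
        self.step += 1
        if self.step < self.switch_step:
            return task_reward  # stage 1: task-only reward
        # stage 2: full reward including the behavioral term
        return task_reward - self.behavior_weight * behavior_penalty
```

In an actual training loop, the agent's replay buffer would be left untouched at the switch, so transitions collected under the task-only reward continue to be sampled once the full reward is active.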
Problem

Research questions and friction points this paper is trying to address.

reward design
multi-objective optimization
reinforcement learning
robotic control
reward shaping
Innovation

Methods, ideas, or system contributions that make the work stand out.

reward curriculum
decoupling task and behavior
reinforcement learning
sample reuse
multi-objective reward