Enabling Option Learning in Sparse Rewards with Hindsight Experience Replay

📅 2026-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of sparse rewards and credit assignment in multi-goal object manipulation tasks, where existing hierarchical reinforcement learning methods struggle to link actions with temporally distant outcomes. To overcome this limitation, the authors integrate Hindsight Experience Replay (HER) into the Multi-updates Option Critic (MOC) framework and propose a novel dual-goal HER (2HER) mechanism that generates relabeled goals from both the object's final state and the agent's end-effector position, enabling more effective hindsight relabeling. This is the first use of dual-goal hindsight strategies in hierarchical reinforcement learning. On robotic manipulation tasks, the resulting method achieves a 90% success rate, substantially outperforming both MOC and MOC-HER, which attain less than 11%, and thereby offers an effective approach to learning under sparse reward conditions.

📝 Abstract
Hierarchical Reinforcement Learning (HRL) frameworks like Option-Critic (OC) and Multi-updates Option Critic (MOC) have introduced significant advancements in learning reusable options. However, these methods underperform in multi-goal environments with sparse rewards, where actions must be linked to temporally distant outcomes. To address this limitation, we first propose MOC-HER, which integrates the Hindsight Experience Replay (HER) mechanism into the MOC framework. By relabeling goals from achieved outcomes, MOC-HER can solve sparse reward environments that are intractable for the original MOC. However, this approach is insufficient for object manipulation tasks, where the reward depends on the object reaching the goal rather than on the agent's direct interaction. This makes it extremely difficult for HRL agents to discover how to interact with these objects. To overcome this issue, we introduce Dual Objectives Hindsight Experience Replay (2HER), a novel extension that creates two sets of virtual goals. In addition to relabeling goals based on the object's final state (standard HER), 2HER also generates goals from the agent's effector positions, rewarding the agent for both interacting with the object and completing the task. Experimental results in robotic manipulation environments show that MOC-2HER achieves success rates of up to 90%, compared to less than 11% for both MOC and MOC-HER. These results highlight the effectiveness of our dual objective relabeling strategy in sparse reward, multi-goal tasks.
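The dual-goal relabeling idea in the abstract can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' implementation: transition keys (`achieved_goal` for the object's position, `effector_pos` for the end-effector), the `k` relabels per transition, the "future" sampling strategy, and the distance `threshold` for the sparse reward are all assumptions. For each transition it produces two sets of hindsight goals, one from object states achieved later in the episode (standard HER) and one from later end-effector positions, so the agent is rewarded both for reaching the object and for completing the task.

```python
import numpy as np

def dual_goal_relabel(trajectory, k=4, threshold=0.05, rng=None):
    """Illustrative dual-goal hindsight relabeling (2HER-style) sketch.

    `trajectory` is a list of transitions, each a dict holding at least
    'achieved_goal' (object position) and 'effector_pos' (end-effector
    position) as NumPy arrays. Field names are assumptions.
    """
    rng = rng or np.random.default_rng()
    relabeled = []
    T = len(trajectory)
    for t, tr in enumerate(trajectory):
        # Standard HER: relabel with object states achieved later in the
        # same episode ("future" strategy); sparse 0/1 reward on proximity.
        for _ in range(k):
            future = rng.integers(t, T)
            goal = trajectory[future]['achieved_goal']
            reward = float(np.linalg.norm(tr['achieved_goal'] - goal) < threshold)
            relabeled.append({**tr, 'goal': goal, 'reward': reward})
        # 2HER addition: also relabel with future end-effector positions,
        # rewarding the agent for moving its effector toward the object.
        for _ in range(k):
            future = rng.integers(t, T)
            goal = trajectory[future]['effector_pos']
            reward = float(np.linalg.norm(tr['effector_pos'] - goal) < threshold)
            relabeled.append({**tr, 'goal': goal, 'reward': reward})
    return relabeled
```

Each episode of length T thus yields 2·k·T relabeled transitions, which would be appended to the replay buffer alongside the original ones.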
Problem

Research questions and friction points this paper is trying to address.

Hierarchical Reinforcement Learning
Sparse Rewards
Option Learning
Multi-goal Environments
Object Manipulation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Reinforcement Learning
Hindsight Experience Replay
Sparse Rewards
Option Learning
Goal Relabeling