🤖 AI Summary
This work addresses the challenges of combinatorial explosion and high-dimensional observations in goal-conditioned reinforcement learning in settings with multiple entities, long task horizons, and sparse rewards. To tackle these issues, the authors propose an entity-centric hierarchical framework: the upper level employs a conditional diffusion model to generate factorized subgoals, while the lower level uses a goal-conditioned reinforcement learning agent to execute them, with a value function guiding subgoal selection. The approach is plug-and-play compatible with existing goal-conditioned RL algorithms and significantly enhances generalization. Evaluated on image-based sparse-reward tasks, the method achieves over 150% higher success rates and scales effectively to settings with more entities and longer task horizons.
📝 Abstract
We propose a hierarchical entity-centric framework for offline Goal-Conditioned Reinforcement Learning (GCRL) that combines subgoal decomposition with factored structure to solve long-horizon tasks in domains with multiple entities. Achieving long-horizon goals in complex environments remains a core challenge in Reinforcement Learning (RL). Domains with multiple entities are particularly difficult due to their combinatorial complexity. GCRL facilitates generalization across goals and the use of subgoal structure, but struggles with high-dimensional observations and combinatorial state spaces, especially under sparse rewards. We employ a two-level hierarchy composed of a value-based GCRL agent and a factored subgoal-generating conditional diffusion model. The RL agent and subgoal generator are trained independently and composed post hoc through selective subgoal generation based on the value function, making the approach modular and compatible with existing GCRL algorithms. We introduce new variations of benchmark tasks that highlight the challenges of multi-entity domains, and show that our method consistently boosts the performance of the underlying RL agent on image-based long-horizon tasks with sparse rewards, achieving over 150% higher success rates on the hardest task in our suite and generalizing to longer horizons and larger numbers of entities. Rollout videos are provided at: https://sites.google.com/view/hecrl
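The post-hoc composition described above can be illustrated with a minimal sketch: sample candidate subgoals from a generator and keep the one the agent's goal-conditioned value function scores highest. The function names (`sample_subgoals`, `value_fn`) and the toy Euclidean stand-ins below are hypothetical, not the paper's actual diffusion sampler or learned value function.

```python
import numpy as np

def select_subgoal(state, final_goal, sample_subgoals, value_fn, n_candidates=8):
    """Value-based subgoal selection: score candidates from a subgoal
    generator with a goal-conditioned value function, return the best."""
    candidates = sample_subgoals(state, final_goal, n_candidates)
    scores = [value_fn(state, g) for g in candidates]
    return candidates[int(np.argmax(scores))]

# Toy stand-ins (assumptions, for illustration only): the "generator"
# samples points between state and goal; the "value" prefers nearby subgoals.
rng = np.random.default_rng(0)
sample = lambda s, g, n: [s + (g - s) * rng.uniform(0.2, 0.8) for _ in range(n)]
value = lambda s, g: -np.linalg.norm(g - s)

state, goal = np.zeros(2), np.ones(2)
subgoal = select_subgoal(state, goal, sample, value)
```

Because the generator and the value function interact only through this selection step, either component can be swapped for any compatible GCRL agent or subgoal model, which is what makes the composition modular.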