🤖 AI Summary
In large-scale sparse-reward environments, agents often fail to reach goals through exploration and exhibit poor policy generalization. Method: This paper proposes a first-order logic–based goal-conditioned reinforcement learning framework. It represents states and goals as sets of first-order atoms; integrates Hindsight Experience Replay (HER) for relation-level goal relabeling; and designs subgoal abstraction and lifted goal generation mechanisms that together construct an automatic curriculum of increasingly complex goals. Contributions/Results: (1) It establishes an interpretable, composable relational policy representation; (2) it significantly improves sample and computational efficiency; and (3) it empirically validates the effectiveness and scalability of two goal abstraction paradigms on complex planning tasks, enabling cross-task transferable general-purpose policy learning.
📝 Abstract
First-order relational languages have been used in MDP planning and reinforcement learning (RL) for two main purposes: specifying MDPs in compact form, and representing and learning policies that are general and not tied to specific instances or state spaces. In this work, we instead consider the use of first-order languages in goal-conditioned RL and generalized planning. The question is how to learn goal-conditioned, general policies when the training instances are large and the goal cannot be reached by random exploration alone. The technique of Hindsight Experience Replay (HER) provides an answer: it relabels unsuccessful trajectories as successful ones by replacing the original goal with one that was actually achieved. Since the target policy must generalize across states and goals, even trajectories that fail to reach their original goal yield useful training data, enabling more data- and time-efficient learning. In this work, we show that further performance gains can be achieved when states and goals are represented by sets of atoms. We consider three versions: goals as full states, goals as subsets of the original goals, and goals as lifted versions of these subgoals. The result is that the latter two successfully learn general policies on large planning instances with sparse rewards by automatically creating a curriculum of easier goals of increasing complexity. The experiments illustrate the computational gains of these versions, their limitations, and opportunities for addressing them.
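The relabeling scheme the abstract describes, replacing a missed goal with atoms that were actually achieved later in the trajectory, optionally abstracted to a subset and lifted to variables, can be sketched as follows. This is a minimal illustration under assumed conventions (states as sets of ground-atom tuples, `?x0`-style variables), not the paper's implementation:

```python
import random

def lift_atoms(atoms):
    """Replace object constants with variables (?x0, ?x1, ...),
    mapping repeated constants consistently across atoms."""
    mapping = {}
    lifted = []
    for pred, *args in atoms:
        new_args = []
        for a in args:
            if a not in mapping:
                mapping[a] = f"?x{len(mapping)}"
            new_args.append(mapping[a])
        lifted.append((pred, *new_args))
    return lifted

def her_relabel(trajectory, k=2, lift=False, rng=random):
    """For each visited state, sample k hindsight goals: random
    subsets of the atoms true in some later (achieved) state.
    With lift=True, constants in the subgoal become variables."""
    relabeled = []
    for t, state in enumerate(trajectory[:-1]):
        future = rng.choice(trajectory[t + 1:])    # an achieved state
        atoms = sorted(future)
        for _ in range(k):
            size = rng.randint(1, len(atoms))      # subgoal abstraction
            subgoal = rng.sample(atoms, size)
            if lift:
                subgoal = lift_atoms(subgoal)      # lifted subgoal
            relabeled.append((state, tuple(subgoal)))
    return relabeled

# Example with Blocksworld-style atoms (hypothetical encoding):
traj = [
    frozenset({("on", "a", "b"), ("clear", "c")}),
    frozenset({("on", "a", "c"), ("clear", "b")}),
]
pairs = her_relabel(traj, k=1, lift=True, rng=random.Random(0))
```

Relabeling with partial, lifted goals rather than full states is what yields the curriculum effect: small subgoals are reachable early in training, and larger ones become reachable as the policy improves.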