Joint Learning of Hierarchical Neural Options and Abstract World Model

📅 2026-02-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of efficiently learning composable, hierarchical skills for flexible agent behavior under data-constrained conditions. We propose AgentOWL, a novel approach that jointly learns an abstract world model and hierarchical neural options through dual abstraction over both state and temporal dimensions. By integrating hierarchical reinforcement learning, object-centric modeling, and neural option mechanisms, AgentOWL significantly enhances sample efficiency and skill reusability. Evaluated on an object-centric subset of Atari environments, our method learns a richer repertoire of composable skills using substantially less training data and outperforms existing baselines by a wide margin.

📝 Abstract
Building agents that can perform new skills by composing existing skills is a long-standing goal of AI agent research. Towards this end, we investigate how to efficiently acquire a sequence of skills, formalized as hierarchical neural options. However, existing model-free hierarchical reinforcement learning algorithms require large amounts of data. We propose a novel method, which we call AgentOWL (Option and World model Learning Agent), that jointly learns -- in a sample-efficient way -- an abstract world model (abstracting across both states and time) and a set of hierarchical neural options. We show, on a subset of Object-Centric Atari games, that our method can learn more skills using much less data than baseline methods.
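For readers unfamiliar with the "option" formalism the abstract refers to, the sketch below illustrates the standard options framework from hierarchical RL: an option bundles an initiation set, an intra-option policy, and a termination condition, and a controller executes it as a temporally extended action. All names here (`Option`, `run_option`, the toy environment) are illustrative assumptions, not AgentOWL's actual API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Option:
    """A temporally extended action in the options framework.

    Hypothetical sketch, not AgentOWL's implementation:
    - can_start: initiation set I (where the option may be invoked)
    - policy: intra-option policy pi (maps state to primitive action)
    - should_stop: termination condition beta (here deterministic)
    """
    name: str
    can_start: Callable[[Any], bool]
    policy: Callable[[Any], Any]
    should_stop: Callable[[Any], bool]

def run_option(option: Option, state: Any,
               step: Callable[[Any, Any], Any], max_steps: int = 100) -> Any:
    """Execute one option until its termination condition fires."""
    assert option.can_start(state), "option invoked outside its initiation set"
    for _ in range(max_steps):
        action = option.policy(state)
        state = step(state, action)
        if option.should_stop(state):
            break
    return state

# Toy 1-D world: state is an integer position, primitive actions are -1/+1.
step = lambda s, a: s + a

go_right_to_5 = Option(
    name="go_right_to_5",
    can_start=lambda s: s < 5,
    policy=lambda s: +1,          # always move right
    should_stop=lambda s: s >= 5, # terminate on reaching position 5
)

final_state = run_option(go_right_to_5, 0, step)  # → 5
```

A higher-level policy would then choose among such options rather than primitive actions, which is what makes skills composable and shortens the effective decision horizon.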
Problem

Research questions and friction points this paper is trying to address.

hierarchical reinforcement learning
skill composition
sample efficiency
abstract world model
neural options
Innovation

Methods, ideas, or system contributions that make the work stand out.

hierarchical reinforcement learning
abstract world model
neural options
sample efficiency
skill composition