Multi-level meta-reinforcement learning with skill-based curriculum

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a semantics-preserving hierarchical compression framework for Markov Decision Processes (MDPs) to address sequential decision-making problems with inherent multi-level structures. By abstracting families of low-level policies into atomic actions within a higher-level MDP, the approach decouples subtasks and substantially reduces the policy search space. Leveraging skill embeddings and higher-order functional decomposition, the framework integrates skill-based curriculum learning with meta-reinforcement learning to enable effective cross-task and cross-hierarchy skill transfer. Experimental results in environments such as MazeBase+ demonstrate that the method significantly enhances abstraction capability and curriculum learning efficiency, markedly decreasing both the number of iterations and computational overhead required to solve the MDP.
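The core compression step described above can be illustrated with a small sketch. All names here (`Skill`, `HighLevelMDP`, the corridor task) are hypothetical illustrations, not code from the paper: a low-level policy is rolled out until its sub-goal is met, and the higher-level MDP treats that entire rollout as one atomic action, which is what shrinks the policy search space at the top level.

```python
class Skill:
    """A low-level policy executed to completion, seen as one action above."""
    def __init__(self, name, step_fn):
        self.name = name
        self.step_fn = step_fn  # maps state -> next state (one primitive step)

    def run(self, state, done_fn, max_steps=100):
        # Roll out primitive steps until the sub-goal (done_fn) is reached.
        steps = 0
        while not done_fn(state) and steps < max_steps:
            state = self.step_fn(state)
            steps += 1
        return state, steps


class HighLevelMDP:
    """Compressed MDP whose action set is a family of skills."""
    def __init__(self, skills):
        self.actions = skills  # each skill is an atomic action at this level

    def step(self, state, skill, done_fn):
        # One high-level transition = a full low-level skill rollout.
        return skill.run(state, done_fn)


# Toy 1-D corridor: reach position 5, then position 10.
move_right = Skill("move_right", lambda s: s + 1)
mdp = HighLevelMDP([move_right])

state = 0
state, n1 = mdp.step(state, move_right, lambda s: s >= 5)   # first sub-goal
state, n2 = mdp.step(state, move_right, lambda s: s >= 10)  # second sub-goal
print(state, n1 + n2)  # two high-level actions cover ten primitive steps
```

Here two high-level decisions replace ten primitive ones, which is the sense in which the compressed MDP needs fewer iterations to solve.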

📝 Abstract
We consider problems in sequential decision making with natural multi-level structure, where sub-tasks are assembled together to accomplish complex goals. Systematically inferring and leveraging hierarchical structure has remained a longstanding challenge; we describe an efficient multi-level procedure for repeatedly compressing Markov decision processes (MDPs), wherein a parametric family of policies at one level is treated as a single action in the compressed MDPs at higher levels, while preserving the semantic meanings and structure of the original MDP and mimicking the natural logic for addressing a complex MDP. Higher-level MDPs are themselves independent MDPs with less stochasticity, and may be solved using existing algorithms. As a byproduct, spatial or temporal scales may be coarsened at higher levels, making it more efficient to find long-term optimal policies. The multi-level representation delivered by this procedure decouples sub-tasks from each other and usually greatly reduces unnecessary stochasticity and the policy search space, leading to fewer iterations and computations when solving the MDPs. A second fundamental aspect of this work is that these multi-level decompositions, plus the factorization of policies into embeddings (problem-specific) and skills (including higher-order functions), yield new opportunities for transferring skills across different problems and different levels. This whole process is framed within curriculum learning, wherein a teacher organizes the student agent's learning process in a way that gradually increases the difficulty of tasks and promotes transfer across MDPs and levels within and across curricula. The consistency of this framework and its benefits can be guaranteed under mild assumptions. We demonstrate abstraction, transferability, and curriculum learning in examples, including MazeBase+, a more complex variant of the MazeBase example.
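The factorization of policies into problem-specific embeddings and reusable skills mentioned in the abstract can be read as function composition: policy = skill ∘ embedding. A minimal sketch of that reading, with all names (`greedy_step_skill`, `embed_task_a`, `make_policy`) hypothetical and not taken from the paper, is:

```python
def greedy_step_skill(features):
    """Reusable skill: given (position, target), step toward the target."""
    pos, target = features
    if pos < target:
        return +1
    if pos > target:
        return -1
    return 0

# Problem-specific embeddings: each task stores its state differently,
# but both can be mapped into the (position, target) features the skill expects.
embed_task_a = lambda state: (state["pos"], state["goal"])
embed_task_b = lambda state: (state["x"], state["waypoints"][0])

def make_policy(skill, embedding):
    """Compose a task-specific policy from a shared skill and a task embedding."""
    return lambda state: skill(embedding(state))

policy_a = make_policy(greedy_step_skill, embed_task_a)
policy_b = make_policy(greedy_step_skill, embed_task_b)

print(policy_a({"pos": 2, "goal": 7}))          # +1: move right
print(policy_b({"x": 9, "waypoints": [4, 1]}))  # -1: move left
```

The same skill serves both tasks unchanged; only the cheap, problem-specific embedding is swapped, which is the sense in which skills transfer across problems and levels.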
Problem

Research questions and friction points this paper is trying to address.

hierarchical reinforcement learning
multi-level MDP
skill transfer
curriculum learning
sequential decision making
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-level reinforcement learning
skill-based curriculum
MDP abstraction
policy factorization
transfer learning
Sichen Yang
Department of Applied Mathematics & Statistics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
Mauro Maggioni
Bloomberg Distinguished Professor of Mathematics, and Applied Mathematics and Statistics
Data Science · Harmonic Analysis · Signal Processing · Stochastic Dynamical Systems