Synthesis of Hierarchical Controllers Based on Deep Reinforcement Learning Policies

📅 2024-02-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses hierarchical control in a two-level environment: a known high-level graph (the "map") whose vertices represent unknown, dynamically evolving MDPs ("rooms"). We propose an end-to-end controller framework integrating deep reinforcement learning (DRL) with reactive synthesis. Methodologically, the low level trains reusable latent policies within each room with PAC guarantees, avoiding model distillation and improving robustness to sparse rewards; the high level employs reactive synthesis to generate a dynamic scheduler satisfying Linear Temporal Logic (LTL) specifications. Theoretically, we establish the first PAC performance guarantees for hierarchical policies and derive bounds on abstraction quality. Experimentally, on navigation tasks with dynamic obstacles, our framework significantly improves policy generalization across rooms and the reliability of high-level scheduling decisions.
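The two-level architecture described above can be sketched as follows. This is a minimal illustration with invented names, not the authors' implementation: a high-level scheduler over a known room graph selects which pretrained low-level latent policy to run in each room until that room's exit is reached.

```python
from typing import Callable, Dict, List

# High-level scheduler (assumed given here): in each room, which exit room to
# head for. In the paper this would be synthesized from an LTL specification.
SCHEDULER: Dict[str, str] = {"A": "B", "B": "C"}  # reach room "C" from "A"

def make_policy(room: str, exit_room: str) -> Callable[[], str]:
    """Stand-in for a DRL-trained latent policy driving to one exit of a room."""
    def policy() -> str:
        # ... low-level control steps inside `room` would execute here ...
        return exit_room  # room the agent occupies once the exit is reached
    return policy

def run(start: str, goal: str) -> List[str]:
    """Two-level loop: the scheduler picks a policy, the policy crosses a room."""
    policies = {r: make_policy(r, e) for r, e in SCHEDULER.items()}
    trace, room = [start], start
    while room != goal:
        room = policies[room]()  # execute the low-level policy to completion
        trace.append(room)
    return trace

print(run("A", "C"))  # -> ['A', 'B', 'C']
```

Because each policy is trained per room rather than per task, the same low-level policies can be reused under different high-level schedulers.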

📝 Abstract
We propose a novel approach to the problem of controller design for environments modeled as Markov decision processes (MDPs). Specifically, we consider a hierarchical MDP: a graph with each vertex populated by an MDP called a "room". We first apply deep reinforcement learning (DRL) to obtain low-level policies for each room, scaling to large rooms of unknown structure. We then apply reactive synthesis to obtain a high-level planner that chooses which low-level policy to execute in each room. The central challenge in synthesizing the planner is the need for modeling rooms. We address this challenge by developing a DRL procedure to train concise "latent" policies together with PAC guarantees on their performance. Unlike previous approaches, ours circumvents a model distillation step. Our approach combats sparse rewards in DRL and enables reusability of low-level policies. We demonstrate feasibility in a case study involving agent navigation amid moving obstacles.
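The abstract's "PAC guarantees on their performance" can be illustrated with a generic sample-complexity calculation. The sketch below is a standard Hoeffding bound, not the paper's specific theorem: it gives the number of independent rollouts sufficient for the empirical mean return of a latent policy (rewards normalized to [0, 1]) to lie within eps of its true value with probability at least 1 - delta.

```python
import math

def pac_sample_size(eps: float, delta: float) -> int:
    """Hoeffding bound: rollouts needed for an (eps, delta)-accurate estimate
    of a [0, 1]-bounded policy value."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

print(pac_sample_size(0.05, 0.01))  # -> 1060
```

Tighter, problem-specific bounds (as the paper derives for its latent abstraction) would replace this generic estimate.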
Problem

Research questions and friction points this paper is trying to address.

Designing controllers for two-level structured environments with formal guarantees
Training low-level policies without model distillation for scalability
Ensuring reusability and performance guarantees in high-level task planning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reactive synthesis for high-level task planning
Reinforcement learning for low-level policy training
Formal guarantees on performance and abstraction quality
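The "reactive synthesis for high-level task planning" idea can be sketched via the standard product construction, on an invented example unrelated to the authors' tool chain: a reach-avoid specification ("avoid room H until room C is reached") becomes a small automaton, and the high-level plan is a path to an accepting state of the product with the room graph.

```python
from collections import deque
from typing import Dict, List, Optional, Tuple

# Invented room graph: edges are traversable exits between rooms.
GRAPH: Dict[str, List[str]] = {"A": ["B", "H"], "B": ["C", "H"], "H": ["C"], "C": []}

def dfa_step(q: str, room: str) -> str:
    """Three-state automaton for the spec (not H) U C."""
    if q != "pending":
        return q  # accept / reject are absorbing
    if room == "C":
        return "accept"
    if room == "H":
        return "reject"
    return "pending"

def plan(start: str) -> List[str]:
    """BFS over the product of the room graph and the spec automaton."""
    init = (start, dfa_step("pending", start))
    queue = deque([init])
    parent: Dict[Tuple[str, str], Optional[Tuple[str, str]]] = {init: None}
    while queue:
        node = queue.popleft()
        room, q = node
        if q == "accept":  # spec satisfied: reconstruct the room sequence
            path: List[str] = []
            cur: Optional[Tuple[str, str]] = node
            while cur is not None:
                path.append(cur[0])
                cur = parent[cur]
            return path[::-1]
        if q == "reject":
            continue  # hazard visited: prune this branch
        for nxt in GRAPH[room]:
            succ = (nxt, dfa_step(q, nxt))
            if succ not in parent:
                parent[succ] = node
                queue.append(succ)
    return []

print(plan("A"))  # -> ['A', 'B', 'C']
```

The paper's setting is harder than this sketch: rooms evolve dynamically, so the synthesized scheduler must react to outcomes rather than commit to a fixed path.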
🔎 Similar Papers
2024-05-28 · International Conference on Learning Representations · Citations: 10