🤖 AI Summary
This paper addresses hierarchical control in a two-level environment: a high-level known graph (“map”) whose vertices represent unknown, dynamically evolving MDPs (“rooms”). We propose an end-to-end controller framework integrating deep reinforcement learning (DRL) with reactive synthesis. Methodologically, the low level trains reusable latent policies within each room, backed by PAC learning theory; this avoids a model-distillation step and improves robustness to sparse rewards. The high level employs reactive synthesis to generate a dynamic scheduler satisfying Linear Temporal Logic (LTL) specifications. Theoretically, we establish PAC performance guarantees for the hierarchical policies and derive bounds on abstraction quality. Experimentally, on navigation tasks with dynamic obstacles, our framework significantly improves policy generalization across rooms and enhances the reliability of high-level scheduling decisions.
📝 Abstract
We propose a novel approach to the problem of controller design for environments modeled as Markov decision processes (MDPs). Specifically, we consider a hierarchical MDP: a graph in which each vertex is populated by an MDP called a "room". We first apply deep reinforcement learning (DRL) to obtain low-level policies for each room, scaling to large rooms of unknown structure. We then apply reactive synthesis to obtain a high-level planner that chooses which low-level policy to execute in each room. The central challenge in synthesizing the planner is the need to model the rooms. We address this challenge by developing a DRL procedure to train concise "latent" policies together with PAC guarantees on their performance. Unlike previous approaches, ours circumvents a model distillation step. Our approach combats sparse rewards in DRL and enables reusability of low-level policies. We demonstrate feasibility in a case study involving agent navigation amid moving obstacles.
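The two-level structure described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: all class and function names (`Room`, `LatentPolicy`, `plan`) are hypothetical, the trained DRL policy is stood in for by a fixed success-probability estimate, and the reactive-synthesis scheduler is stood in for by simple graph search over the room map.

```python
import random


class Room:
    """A vertex of the high-level map; in the paper, each room is an
    unknown MDP abstracted by a concise latent policy."""
    def __init__(self, name, exits):
        self.name = name
        self.exits = exits  # names of neighboring rooms reachable on success


class LatentPolicy:
    """Stand-in for a DRL-trained low-level policy, carrying a PAC-style
    estimate of its success probability for use by the high-level planner."""
    def __init__(self, success_prob):
        self.success_prob = success_prob

    def run(self, rng):
        # Executing the policy in a room succeeds with the estimated probability.
        return rng.random() < self.success_prob


def plan(graph, start, goal):
    """High-level planner: breadth-first search over the room graph.
    (A stand-in for the reactive-synthesis scheduler, which would also
    handle LTL objectives and dynamically re-plan.)"""
    frontier, parents = [start], {start: None}
    while frontier:
        room = frontier.pop(0)
        if room == goal:
            path = []
            while room is not None:
                path.append(room)
                room = parents[room]
            return path[::-1]
        for nxt in graph[room].exits:
            if nxt not in parents:
                parents[nxt] = room
                frontier.append(nxt)
    return None  # goal unreachable in the map


if __name__ == "__main__":
    # Toy map: three rooms in a line, one reusable low-level policy.
    graph = {
        "A": Room("A", exits=["B"]),
        "B": Room("B", exits=["A", "C"]),
        "C": Room("C", exits=["B"]),
    }
    route = plan(graph, "A", "C")
    print(route)  # the room sequence the planner schedules
```

The key design point the sketch mirrors is the separation of concerns: the low-level policy only needs to solve its own room, so it can be reused at every vertex, while the planner reasons purely over the small known graph rather than the large product MDP.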