Regret-Free Reinforcement Learning for LTL Specifications

πŸ“… 2024-11-18
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the online synthesis of control policies satisfying Linear Temporal Logic (LTL) specifications for safety-critical systems operating under unknown Markov Decision Processes (MDPs). Existing approaches provide only asymptotic performance guarantees and lack instantaneous performance assurances during learning. To overcome this limitation, the authors propose the first online no-regret reinforcement learning algorithm applicable to arbitrary LTL specifications. The method reformulates LTL synthesis as a reach-avoid graph game and introduces a dedicated probabilistic graph-structure learning module, integrated with MDP modeling, LTL automaton construction, and hierarchical control synthesis. The authors prove that the algorithm achieves zero cumulative regret within a finite number of steps, delivering rigorous, verifiable finite-time performance guarantees for any LTL specification over finite-state, finite-action MDPs, thereby removing the reliance on asymptotic convergence inherent in prior methods.
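The summary's reformulation step, composing the MDP with an automaton for the LTL formula so that satisfying the specification becomes reaching accepting states, can be illustrated on a toy example. This is a minimal sketch, not the paper's implementation: the two-state MDP, the labels, and the automaton for "eventually goal" are all invented for illustration.

```python
from itertools import product

# Hypothetical 2-state MDP: transitions[state][action] -> next state
# (deterministic here only for brevity; the paper treats stochastic MDPs).
mdp_transitions = {
    "s0": {"a": "s1", "b": "s0"},
    "s1": {"a": "s0", "b": "s1"},
}
labels = {"s0": "safe", "s1": "goal"}  # atomic proposition seen in each state

# Hypothetical deterministic automaton for "eventually goal":
# automaton[q][label] -> next automaton state.
automaton = {
    "q0": {"safe": "q0", "goal": "q_acc"},
    "q_acc": {"safe": "q_acc", "goal": "q_acc"},
}
accepting = {"q_acc"}

def product_mdp(trans, labels, aut):
    """Build product transitions ((s, q), a) -> (s', q'): the automaton
    tracks progress toward the LTL objective alongside the MDP state."""
    prod = {}
    for (s, q) in product(trans, aut):
        for a, s_next in trans[s].items():
            q_next = aut[q][labels[s_next]]
            prod[((s, q), a)] = (s_next, q_next)
    return prod

prod = product_mdp(mdp_transitions, labels, automaton)
# Reaching any product state whose automaton component is accepting
# now corresponds to satisfying the LTL specification.
print(prod[(("s0", "q0"), "a")])  # ('s1', 'q_acc')
```

On the product, the LTL task reduces to a reach-avoid objective over the accepting set, which is the form the paper's main algorithm learns to solve.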

πŸ“ Abstract
Reinforcement learning (RL) is a promising method to learn optimal control policies for systems with unknown dynamics. In particular, synthesizing controllers for safety-critical systems based on high-level specifications, such as those expressed in temporal languages like linear temporal logic (LTL), presents a significant challenge in control systems research. Current RL-based methods designed for LTL tasks typically offer only asymptotic guarantees, which provide no insight into the transient performance during the learning phase. While running an RL algorithm, it is crucial to assess how close we are to achieving optimal behavior if we stop learning. In this paper, we present the first regret-free online algorithm for learning a controller that addresses the general class of LTL specifications over Markov decision processes (MDPs) with a finite set of states and actions. We begin by proposing a regret-free learning algorithm to solve infinite-horizon reach-avoid problems. For general LTL specifications, we show that the synthesis problem can be reduced to a reach-avoid problem when the graph structure is known. Additionally, we provide an algorithm for learning the graph structure, assuming knowledge of a minimum transition probability, which operates independently of the main regret-free algorithm.
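The abstract's graph-structure learning step assumes a known lower bound p_min on every nonzero transition probability. Under that assumption, a simple coupon-collector-style bound applies: if a state-action pair has been sampled n times with (1 - p_min)^n ≤ δ, then any successor never observed can be ruled out with confidence 1 - δ. The sketch below illustrates only this bound; the paper's actual algorithm, constants, and interface may differ, and `learned_support` is a hypothetical helper.

```python
import math

def samples_needed(p_min: float, delta: float) -> int:
    """Smallest n with (1 - p_min)**n <= delta: after n visits to a
    state-action pair, every transition of probability >= p_min has been
    observed at least once with probability >= 1 - delta."""
    return math.ceil(math.log(delta) / math.log(1.0 - p_min))

def learned_support(observed_counts, n_visits, p_min, delta):
    """Hypothetical helper: declare the edge set of a state-action pair
    learned once it has been visited enough times; successors never seen
    by then are treated as impossible."""
    if n_visits < samples_needed(p_min, delta):
        return None  # not enough data yet
    return {s for s, c in observed_counts.items() if c > 0}

n = samples_needed(p_min=0.1, delta=1e-3)
print(n)  # 66
```

Because this test depends only on visit counts, the graph can be learned in a phase that runs independently of the main regret-free algorithm, matching the decoupling described in the abstract.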
Problem

Research questions and friction points this paper is trying to address.

Learn regret-free control for LTL specifications with unknown dynamics
Reduce LTL synthesis to reach-avoid problems using MDPs
Provide finite-time performance bounds for LTL controller synthesis
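For context on the "regret-free" bullet above, regret in this setting is commonly measured as the accumulated gap between the optimal satisfaction probability and that achieved by the policies played while learning. A standard episodic formalization (the paper's exact definition may differ) is:

```latex
\mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \bigl( p^{*}(s_0) - p^{\pi_k}(s_0) \bigr)
```

where p*(s_0) is the maximal probability of satisfying the reach-avoid objective from the initial state and π_k is the policy used in episode k. "Regret-free" then means the sum stops growing after finitely many episodes, i.e., the learner plays optimal policies from some finite time onward.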
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regret-free online LTL controller synthesis
Reduction to reach-avoid MDP problems
Graph structure learning with probability bounds
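Once the reduction above is in place and a model estimate is available, the reach-avoid problem itself is solvable by standard value iteration: maximize the probability of reaching a target set while never entering an unsafe set. The sketch below shows that classical computation on a toy MDP; the state names, transition kernel, and tolerances are illustrative assumptions, not the paper's code.

```python
def reach_avoid_values(states, actions, P, target, avoid, iters=1000, tol=1e-9):
    """Value iteration for the max probability of reaching `target` while
    avoiding `avoid`. P[(s, a)] maps successor state -> probability."""
    V = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):
        V_new = {}
        for s in states:
            if s in target:
                V_new[s] = 1.0          # already satisfied
            elif s in avoid:
                V_new[s] = 0.0          # objective violated
            else:
                V_new[s] = max(
                    sum(p * V[t] for t, p in P[(s, a)].items())
                    for a in actions
                )
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            V = V_new
            break
        V = V_new
    return V

# Toy example: from s0, action "a" reaches the goal w.p. 0.5 and the
# unsafe sink w.p. 0.5; action "b" stays put (and so never succeeds).
states = ["s0", "goal", "bad"]
actions = ["a", "b"]
P = {
    ("s0", "a"): {"goal": 0.5, "bad": 0.5},
    ("s0", "b"): {"s0": 1.0},
    ("goal", "a"): {"goal": 1.0}, ("goal", "b"): {"goal": 1.0},
    ("bad", "a"): {"bad": 1.0}, ("bad", "b"): {"bad": 1.0},
}
V = reach_avoid_values(states, actions, P, target={"goal"}, avoid={"bad"})
print(V["s0"])  # 0.5
```

The learning problem the paper tackles is harder than this computation: P is unknown, so the algorithm must bound the regret incurred while estimating it, which is where the graph-learning and finite-time guarantees come in.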
πŸ”Ž Similar Papers
✍️ Authors
R. Majumdar (MPI-SWS, Kaiserslautern, Germany)
Mahmoud Salamati (Max Planck Institute for Software Systems)
S. Soudjani (MPI-SWS, Kaiserslautern, Germany)

Topics: Cyber-physical systems · Machine learning · Formal methods