AI Summary
This work addresses the online synthesis of control policies satisfying Linear Temporal Logic (LTL) specifications for safety-critical systems operating under unknown Markov Decision Processes (MDPs). Existing approaches provide only asymptotic performance guarantees and lack instantaneous performance assurances during learning. To overcome this limitation, we propose the first online no-regret reinforcement learning algorithm applicable to arbitrary LTL specifications. Our method reformulates LTL synthesis as a reach-avoid graph game and introduces a dedicated probabilistic graph structure learning module, integrated with MDP modeling, LTL automaton construction, and hierarchical control synthesis. We theoretically prove that the algorithm achieves zero cumulative regret within a finite number of steps, delivering rigorous, verifiable finite-time performance guarantees for any LTL specification over finite-state, finite-action MDPs, thereby breaking the reliance on asymptotic convergence inherent in prior methods.
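The reach-avoid reformulation mentioned above can be sketched in code. The following is an illustrative assumption, not the paper's implementation: it composes a finite MDP with a deterministic automaton for the LTL formula, so that reaching an accepting automaton state in the product becomes the reach target of a reach-avoid game. All names (`product_mdp`, the toy states and labels) are hypothetical.

```python
from itertools import product as cartesian

def product_mdp(mdp_states, actions, transitions, labels,
                aut_states, aut_delta, accepting):
    """Pair each MDP state with an automaton state; product states whose
    automaton component is accepting form the reach target."""
    prod_states = list(cartesian(mdp_states, aut_states))
    prod_trans = {}
    target = set()
    for (s, q) in prod_states:
        if q in accepting:
            target.add((s, q))
        for a in actions:
            for (s2, p) in transitions.get((s, a), []):
                # The automaton reads the label of the successor state.
                q2 = aut_delta[(q, labels[s2])]
                prod_trans.setdefault(((s, q), a), []).append(((s2, q2), p))
    return prod_states, prod_trans, target

# Toy example: a two-state MDP and an automaton for "eventually b".
mdp_states = ["s0", "s1"]
actions = ["a"]
transitions = {("s0", "a"): [("s0", 0.5), ("s1", 0.5)],
               ("s1", "a"): [("s1", 1.0)]}
labels = {"s0": "", "s1": "b"}  # only s1 satisfies proposition b
aut_delta = {("q0", ""): "q0", ("q0", "b"): "q1",
             ("q1", ""): "q1", ("q1", "b"): "q1"}
_, prod_trans, target = product_mdp(mdp_states, actions, transitions,
                                    labels, ["q0", "q1"], aut_delta, {"q1"})
```

Any policy that reaches `target` in the product with some probability satisfies the LTL task with the same probability, which is what lets the reach-avoid machinery carry over.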
Abstract
Reinforcement learning (RL) is a promising method for learning optimal control policies for systems with unknown dynamics. In particular, synthesizing controllers for safety-critical systems from high-level specifications, such as those expressed in temporal languages like linear temporal logic (LTL), is a significant challenge in control systems research. Current RL-based methods designed for LTL tasks typically offer only asymptotic guarantees, which provide no insight into transient performance during the learning phase. While running an RL algorithm, it is crucial to know how close the current policy is to optimal behavior if learning were stopped. In this paper, we present the first regret-free online algorithm for learning a controller for the general class of LTL specifications over Markov decision processes (MDPs) with finite state and action sets. We begin by proposing a regret-free learning algorithm for infinite-horizon reach-avoid problems. For general LTL specifications, we show that the synthesis problem reduces to a reach-avoid problem once the graph structure of the MDP is known. We additionally provide an algorithm for learning this graph structure, assuming knowledge of a minimum transition probability, which operates independently of the main regret-free algorithm.
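The graph-structure learning step in the last sentence can be sketched as follows. This is our own hedged illustration, not the paper's algorithm: if every positive transition probability is at least `p_min`, then a true edge escapes `n` i.i.d. draws of a state-action pair with probability at most `(1 - p_min)**n`, so a sample size guaranteeing total failure probability at most `delta` follows from a union bound over pairs. The names `learn_graph`, `sample`, and the toy MDP are assumptions for illustration.

```python
import math
import random

def learn_graph(states, actions, sample, p_min, delta):
    """Estimate which transitions have positive probability.
    Each true edge has probability >= p_min, so it is missed by n
    i.i.d. draws with probability at most (1 - p_min)**n; n below
    keeps the union-bound failure probability at most delta."""
    n = math.ceil(math.log(delta / (len(states) * len(actions)))
                  / math.log(1.0 - p_min))
    return {(s, a): {sample(s, a) for _ in range(n)}
            for s in states for a in actions}

# Toy two-state MDP with known p_min = 0.5.
rng = random.Random(0)
def sample(s, a):
    if s == "s0":
        return "s1" if rng.random() < 0.5 else "s0"
    return "s1"  # s1 is absorbing

edges = learn_graph(["s0", "s1"], ["a"], sample, p_min=0.5, delta=0.05)
```

Note that this estimator can only under-report edges (every observed successor is real), which is why it can run once up front, independently of the main regret-free algorithm.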