🤖 AI Summary
Discounted reinforcement learning (RL) suffers from a semantic mismatch with ω-regular specifications—particularly absolute liveness properties—in infinite-horizon persistent tasks. Method: We propose the first model-free average-reward RL framework enabling reset-free online learning in unknown communicating Markov decision processes (MDPs). Our approach (1) introduces the first end-to-end automatic reduction of ω-regular specifications to average-reward objectives; (2) designs a lexicographic multi-objective reward structure that jointly optimizes specification-satisfaction probability and long-run average reward; and (3) supports on-the-fly reductions with convergence guarantees. Contribution/Results: We prove convergence under the communicating-MDP assumption. Experiments demonstrate substantial improvements over discounted-RL baselines across multiple benchmarks, achieving specification-driven policy learning without prior knowledge of the environment.
📝 Abstract
Recent advances in reinforcement learning (RL) have renewed focus on the design of reward functions that shape agent behavior. Manually designing reward functions is tedious and error-prone. A principled alternative is to specify behaviors in a formal language that can be automatically translated into rewards. Omega-regular languages are a natural choice for this purpose, given their established role in formal verification and synthesis. However, existing methods using omega-regular specifications typically rely on discounted-reward RL in episodic settings with periodic resets. This setup misaligns with the semantics of omega-regular specifications, which describe properties over infinite behavior traces. In such cases, the average-reward criterion and the continuing setting -- where the agent interacts with the environment over a single, uninterrupted lifetime -- are more appropriate. To address the challenges of infinite-horizon, continuing tasks, we focus on absolute liveness specifications -- a subclass of omega-regular languages that cannot be violated by any finite behavior prefix, making them well-suited to the continuing setting. We present the first model-free RL framework that translates absolute liveness specifications into average-reward objectives. Our approach enables learning in communicating MDPs without episodic resets. We also introduce a reward structure for lexicographic multi-objective optimization, which maximizes an external average-reward objective among the policies that also maximize the satisfaction probability of a given omega-regular specification. Our method guarantees convergence in unknown communicating MDPs and supports on-the-fly reductions that do not require full knowledge of the environment, enabling model-free RL. Empirical results show that our average-reward approach in the continuing setting outperforms discount-based methods across benchmarks.
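The lexicographic preference described in the abstract -- satisfaction probability first, average reward only as a tie-breaker -- can be illustrated with a minimal sketch. All names here are illustrative assumptions, not the paper's actual algorithm or API; the paper learns this ordering model-free, whereas the sketch simply compares candidate policies whose objective values are already given.

```python
# Hypothetical sketch: lexicographic selection over candidate policies.
# Each candidate is (name, satisfaction_probability, average_reward).
# Step 1: keep only policies (near-)maximizing satisfaction probability.
# Step 2: among those, pick the one with the highest average reward.

def lexicographic_best(policies, tol=1e-6):
    """Return the policy that is lexicographically best:
    satisfaction probability dominates; average reward breaks ties."""
    best_sat = max(sat for _, sat, _ in policies)
    top = [p for p in policies if best_sat - p[1] <= tol]
    return max(top, key=lambda p: p[2])

candidates = [
    ("safe_slow", 0.99, 1.0),
    ("safe_fast", 0.99, 2.5),  # same satisfaction probability, better reward
    ("risky",     0.80, 9.0),  # highest reward, but lower satisfaction
]
print(lexicographic_best(candidates))  # -> ('safe_fast', 0.99, 2.5)
```

Note how the high-reward but lower-satisfaction policy is rejected outright: average reward only matters within the set of satisfaction-optimal policies, which is what distinguishes the lexicographic objective from a weighted sum of the two criteria.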