Reinforcement Learning with $ω$-Regular Objectives and Constraints

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional reinforcement learning relies on scalar rewards, which inadequately capture temporal dependencies, conditional logic, and safety-critical constraints, leading to reward hacking and unsafe behavior. Method: We propose a paradigm integrating ω-regular objectives with explicit safety constraints, the first to employ ω-regular properties jointly for both optimization goals and constraint specification, thereby decoupling performance optimization from risk control. Our approach combines model-based RL, translation of ω-regular properties to automata, and linear programming in a constrained average-reward optimization framework that maximizes the probability of satisfying the ω-regular objective while guaranteeing safety under a user-specified risk threshold. Contribution/Results: We provide theoretical guarantees that the algorithm converges to an optimal policy satisfying the ω-regular specification *and* strictly adhering to all safety constraints. The framework combines expressive specification capability, formal safety enforcement, and optimality, bridging rigorous temporal-logic reasoning with practical RL optimization.
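The constrained average-reward linear program at the core of such methods can be sketched as an optimisation over state-action occupancy measures. The toy 2-state MDP, rewards, safety costs, and threshold below are hypothetical illustrations (not from the paper), solved with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action MDP; all numbers are illustrative.
n_s, n_a = 2, 2
P = np.zeros((n_s, n_a, n_s))
P[0, 0] = [0.9, 0.1]   # cautious action: mostly stay in state 0
P[0, 1] = [0.2, 0.8]   # risky action: likely jump to state 1
P[1, 0] = [0.7, 0.3]
P[1, 1] = [0.1, 0.9]
r = np.array([[1.0, 4.0], [0.0, 2.0]])  # average reward to maximise
c = np.array([[0.0, 1.0], [0.5, 1.0]])  # safety cost, bounded by threshold d
d = 0.6

# Decision variables: stationary occupancy measure x[s, a], flattened.
idx = lambda s, a: s * n_a + a
n = n_s * n_a

# Flow conservation: sum_a x[s,a] = sum_{s',a'} P[s'->s under a'] x[s',a'],
# plus the normalisation sum_{s,a} x[s,a] = 1.
A_eq = np.zeros((n_s + 1, n))
for s in range(n_s):
    for a in range(n_a):
        A_eq[s, idx(s, a)] += 1.0
    for sp in range(n_s):
        for a in range(n_a):
            A_eq[s, idx(sp, a)] -= P[sp, a, s]
A_eq[n_s, :] = 1.0
b_eq = np.zeros(n_s + 1)
b_eq[n_s] = 1.0

res = linprog(c=-r.ravel(),                  # maximise r.x = minimise -r.x
              A_ub=c.ravel()[None, :], b_ub=[d],   # safety: c.x <= d
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
x = res.x.reshape(n_s, n_a)
policy = x / x.sum(axis=1, keepdims=True)    # randomised stationary policy
```

The occupancy-measure formulation makes both the objective and the safety constraint linear, so adding further ω-regular constraints only adds rows to `A_ub`.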

📝 Abstract
Reinforcement learning (RL) commonly relies on scalar rewards with limited ability to express temporal, conditional, or safety-critical goals, and can lead to reward hacking. Temporal logic, and the more general class of $ω$-regular objectives, addresses this by precisely specifying rich behavioural properties. Even so, measuring performance by a single scalar (be it reward or satisfaction probability) masks safety-performance trade-offs that arise in settings with a tolerable level of risk. We address both limitations simultaneously by combining $ω$-regular objectives with explicit constraints, allowing safety requirements and optimisation targets to be treated separately. We develop a model-based RL algorithm based on linear programming, which in the limit produces a policy maximising the probability of satisfying an $ω$-regular objective while also adhering to $ω$-regular constraints within specified thresholds. Furthermore, we establish a translation to constrained limit-average problems with optimality-preserving guarantees.
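To give a flavour of the translation to limit-average problems, the standard product of a labelled MDP with an ω-automaton can be sketched as below. The toy MDP, the one-state Büchi automaton for "visit goal infinitely often" (GF goal), and all names are illustrative assumptions; the paper's reduction additionally carries optimality-preserving guarantees that this sketch does not capture.

```python
# Sketch: product construction turning an ω-regular objective into a
# limit-average reward. Toy example, not the paper's full translation.

# Labelled MDP: state -> label, (state, action) -> {next_state: prob}.
labels = {0: "safe", 1: "goal"}
trans = {
    (0, "a"): {0: 0.5, 1: 0.5},
    (0, "b"): {0: 1.0},
    (1, "a"): {0: 1.0},
}

# One-state Büchi automaton for GF goal, with transition-based acceptance:
# every edge reading "goal" is accepting.
aut_states = {"q0"}
def aut_step(q, label):
    return "q0", label == "goal"   # (next automaton state, accepting edge?)

# Product MDP: states are (mdp_state, aut_state); a transition earns
# limit-average reward 1 exactly when the automaton takes an accepting edge.
prod_trans, prod_reward = {}, {}
for (s, act), dist in trans.items():
    for q in aut_states:
        nxt = {}
        for s2, p in dist.items():
            q2, accepting = aut_step(q, labels[s2])
            nxt[(s2, q2)] = nxt.get((s2, q2), 0.0) + p
            prod_reward[((s, q), act, (s2, q2))] = 1.0 if accepting else 0.0
        prod_trans[((s, q), act)] = nxt
```

Maximising the limit-average reward on this product then corresponds to favouring runs that take accepting edges infinitely often; relating this precisely to satisfaction probability is where the optimality-preserving guarantees come in.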
Problem

Research questions and friction points this paper is trying to address.

Extending reinforcement learning beyond scalar rewards using ω-regular objectives
Addressing safety-performance trade-offs through explicit ω-regular constraints
Developing algorithms to maximize objective satisfaction while respecting constraint thresholds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining ω-regular objectives with explicit constraints
Model-based RL algorithm using linear programming
Translation to constrained limit-average problems with guarantees