🤖 AI Summary
This work addresses safety verification and policy synthesis for stochastic control systems under known noise distributions. We propose a safety-critical control framework based on average-reward Markov decision processes (MDPs). The key contribution is the first rigorous reduction of high-confidence state-constraint satisfaction to a standard average-reward MDP objective, enabling direct application of mature optimization tools, such as linear programming, to synthesize safe policies. Unlike conventional discounted-reward approaches, this formulation eliminates the bias induced by discounting and avoids its slow convergence. Experiments on the Double Integrator and Inverted Pendulum benchmarks show that the synthesized policies significantly outperform baseline minimum discounted-reward policies in three critical respects: convergence speed, completeness of safe-state coverage, and overall policy quality.
📝 Abstract
Safety in stochastic control systems, which are subject to random noise with a known probability distribution, concerns computing policies that satisfy predefined operational constraints with high confidence throughout the uncertain evolution of the state variables. This unpredictable evolution poses a significant challenge for any control method that must meet such constraints. To address it, we present a new algorithm that computes safe policies and determines the achievable safety level over a finite state set. The algorithm reduces the safety objective to the standard average-reward Markov decision process (MDP) objective, which enables standard techniques, such as linear programming, to be used to compute and analyze safe policies. We validate the proposed method numerically on the Double Integrator and Inverted Pendulum systems. The results indicate that the average-reward MDP solution is more comprehensive, converges faster, and is of higher quality than the minimum discounted-reward solution.
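The reduction described above lands in a well-studied setting: an average-reward MDP can be solved with the classic occupancy-measure linear program. The sketch below is our illustration rather than the authors' code; it encodes the safety objective as a reward of 1 in safe states and 0 elsewhere and solves the resulting LP with SciPy. The function name, the transition tensor `P`, and the `safe` mask are assumptions made for the example.

```python
# Minimal sketch (not the paper's implementation): the standard
# occupancy-measure LP for an average-reward MDP, with reward 1 in
# safe states, solved via scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

def solve_avg_reward_safety_mdp(P, safe):
    """P: (S, A, S) transition tensor, P[s, a, s'] = Pr(s' | s, a).
    safe: boolean mask of length S marking the safe states.
    Returns a stationary policy maximizing the long-run fraction of
    time spent in the safe set, plus the optimal gain."""
    S, A, _ = P.shape
    r = np.repeat(safe.astype(float), A)        # r(s, a) = 1 iff s is safe
    # Balance constraints: sum_a x(s', a) = sum_{s, a} P(s'|s, a) x(s, a),
    # plus one normalization row forcing the occupancy measure to sum to 1.
    A_eq = np.zeros((S + 1, S * A))
    for s in range(S):
        for a in range(A):
            col = s * A + a
            A_eq[s, col] += 1.0                 # outflow from state s
            A_eq[:S, col] -= P[s, a, :]         # inflow into successors
    A_eq[S, :] = 1.0
    b_eq = np.zeros(S + 1)
    b_eq[S] = 1.0
    # linprog minimizes, so negate the reward vector; x(s, a) >= 0.
    res = linprog(-r, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    x = res.x.reshape(S, A)
    # Recover a policy: pi(a|s) proportional to x(s, a); uniform where
    # the occupancy of s is (numerically) zero.
    mass = x.sum(axis=1, keepdims=True)
    pi = np.where(mass > 1e-12, x / np.maximum(mass, 1e-12), 1.0 / A)
    return pi, -res.fun
```

Under the standard unichain assumption, the optimal gain returned here is the maximal long-run fraction of time spent in the safe set, which is the quantity the safety reduction targets; the same LP structure is what makes mature solvers directly applicable.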