🤖 AI Summary
This paper investigates **relational reachability verification** for Markov decision processes (MDPs): deciding whether there exists a scheduler under which the reachability probabilities to two target state sets satisfy a given relational constraint (e.g., an inequality between them). The problem generalizes conventional single-threshold probabilistic verification and enables rigorous analysis of randomized algorithms and security protocols. The authors establish, for the first time, the decidability boundary of this problem, characterizing tractable subclasses and proving NP- and PSPACE-hardness for natural variants. Methodologically, they propose a hybrid algorithm that integrates symbolic state compression, linear programming, and constraint solving, enhanced by scheduler-space pruning and probabilistic differential modeling. They implement an open-source tool and evaluate it on standard benchmarks: on the subset of benchmarks within its fragment, it outperforms existing solvers for more general probabilistic hyperlogics by orders of magnitude, and it verifies a range of security protocols and distributed randomized algorithms.
📝 Abstract
Markov decision processes model systems subject to nondeterministic and probabilistic uncertainty. A plethora of verification techniques addresses variations of reachability properties, such as: Is there a scheduler resolving the nondeterminism such that the probability to reach an error state is above a threshold? We consider an understudied extension that relates different reachability probabilities, such as: Is there a scheduler such that two sets of states are reached with different probabilities? These questions appear naturally in the design of randomized algorithms and in various security applications. We provide a tractable algorithm for many variations of this problem, while proving computational hardness of some others. An implementation of our algorithm beats solvers for more general probabilistic hyperlogics by orders of magnitude, on the subset of their benchmarks that are within our fragment.
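The relational question above ("Is there a scheduler such that two sets of states are reached with different probabilities?") can be illustrated on a toy MDP. This is a minimal sketch, not the paper's algorithm: the MDP, its transition probabilities, and the brute-force enumeration of memoryless deterministic schedulers are all illustrative assumptions. It searches for a scheduler under which target set T1 (state 2) is reached with strictly smaller probability than T2 (state 3).

```python
from itertools import product

# Hypothetical toy MDP: state 0 is initial, states 2 and 3 are absorbing
# targets T1 and T2. transitions[state][action] = [(probability, successor)].
transitions = {
    0: {"a": [(0.5, 1), (0.5, 2)],
        "b": [(1.0, 1)]},
    1: {"a": [(0.4, 2), (0.6, 3)],
        "b": [(0.9, 2), (0.1, 3)]},
}

def reach_probs(scheduler):
    """Probabilities of reaching targets 2 and 3 from state 0 under a
    memoryless deterministic scheduler, via fixed-point iteration."""
    probs = {t: {s: 1.0 if s == t else 0.0 for s in (0, 1, 2, 3)}
             for t in (2, 3)}
    for t in (2, 3):
        for _ in range(100):  # this absorbing toy model converges quickly
            for s in (1, 0):  # only non-absorbing states are updated
                probs[t][s] = sum(p * probs[t][s2]
                                  for p, s2 in transitions[s][scheduler[s]])
    return probs[2][0], probs[3][0]

# Enumerate all memoryless deterministic schedulers and look for one
# satisfying the relational constraint P(reach T1) < P(reach T2).
witness = None
for c0, c1 in product("ab", repeat=2):
    sched = {0: c0, 1: c1}
    p1, p2 = reach_probs(sched)
    if p1 < p2:
        witness = (sched, p1, p2)
        break

print(witness)  # → ({0: 'b', 1: 'a'}, 0.4, 0.6)
```

In this toy model only the scheduler that plays "b" at state 0 and "a" at state 1 satisfies the constraint, which is exactly the kind of existential witness the paper's verification problem asks for; the paper's contribution is deciding this without naive scheduler enumeration.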