Learning Algorithms for Verification of Markov Decision Processes

📅 2024-03-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
State-space explosion severely hampers verification of Markov decision processes (MDPs), particularly for unbounded probabilistic reachability, where standard algorithms must exhaustively explore the state space. Method: This paper proposes a general learning-based, heuristic-driven verification framework that unifies heuristic-directed partial state exploration with statistical model checking for unbounded properties—without requiring time-boundedness or discounting assumptions—and that covers both fully known models and black-box sampling scenarios. The approach integrates model checking, heuristic search, Monte Carlo sampling, and confidence-interval estimation, extending and correcting prior work by Brázdil et al. Contribution/Results: In the full-model setting, the framework computes precise upper and lower bounds on reachability probabilities from only a partial exploration; in the black-box setting, it enables efficient approximate verification at significantly reduced computational cost, while providing rigorous probabilistic guarantees and guaranteed termination.

📝 Abstract
We present a general framework for applying learning algorithms and heuristic guidance to the verification of Markov decision processes (MDPs). The primary goal of our techniques is to improve performance by avoiding an exhaustive exploration of the state space, instead focussing on particularly relevant areas of the system, guided by heuristics. Our work builds on the previous results of Brázdil et al., significantly extending them as well as refining several details and fixing errors. The presented framework focuses on probabilistic reachability, which is a core problem in verification, and is instantiated in two distinct scenarios. The first assumes that full knowledge of the MDP is available, in particular precise transition probabilities. It performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP without knowing the exact transition dynamics. Here, we obtain probabilistic guarantees, again in terms of both the lower and upper bounds, which provides efficient stopping criteria for the approximation. In particular, the latter is an extension of statistical model-checking (SMC) for unbounded properties in MDPs. In contrast to other related approaches, we do not restrict our attention to time-bounded (finite-horizon) or discounted properties, nor assume any particular structural properties of the MDP.
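As a rough illustration of the full-model setting described in the abstract, the sketch below runs a BRTDP-style loop on a hypothetical toy MDP: it maintains lower and upper bounds on the reachability probability, simulates paths greedily with respect to the upper bound, and back-propagates Bellman updates until the bounds at the initial state are ε-close. The MDP, state names, and plain probabilistic successor sampling are illustrative assumptions, not the paper's implementation (which additionally handles end components and can bias sampling toward states with a large bound difference).

```python
import random

# Hypothetical toy MDP: state -> action -> list of (prob, successor).
TOY_MDP = {
    "s0": {"a": [(0.5, "s1"), (0.5, "s2")]},
    "s1": {"a": [(1.0, "goal")]},
    "s2": {"a": [(0.9, "s0"), (0.1, "sink")]},
    "goal": {}, "sink": {},
}
TARGET = {"goal"}

def brtdp(mdp=TOY_MDP, init="s0", eps=1e-6, max_episodes=100_000):
    """Sketch: keep bounds L <= Pr(reach TARGET) <= U per state and
    tighten them along simulated paths until U(init) - L(init) < eps."""
    L = {s: (1.0 if s in TARGET else 0.0) for s in mdp}
    U = {s: (0.0 if not mdp[s] and s not in TARGET else 1.0) for s in mdp}
    for _ in range(max_episodes):
        if U[init] - L[init] < eps:
            break
        path, s = [], init
        while mdp[s] and s not in TARGET and len(path) < 100:
            path.append(s)
            # optimistic choice: action with the highest upper-bound value
            a = max(mdp[s], key=lambda a: sum(p * U[t] for p, t in mdp[s][a]))
            # sample a successor according to the transition probabilities
            r, acc = random.random(), 0.0
            for p, t in mdp[s][a]:
                acc += p
                if r <= acc:
                    s = t
                    break
        # back-propagate Bellman updates along the visited path
        for s in reversed(path):
            L[s] = max(sum(p * L[t] for p, t in mdp[s][a]) for a in mdp[s])
            U[s] = max(sum(p * U[t] for p, t in mdp[s][a]) for a in mdp[s])
    return L[init], U[init]
```

For this toy model the true reachability probability from `s0` is 10/11, and the loop converges after a handful of episodes because every cycle leaks probability to a terminal state.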
Problem

Research questions and friction points this paper is trying to address.

Developing heuristic-guided algorithms for efficient MDP verification
Providing probabilistic bounds for reachability without exhaustive state exploration
Extending statistical model-checking to unbounded properties in MDPs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Heuristic-guided partial state space exploration
Statistical sampling without exact transition knowledge
Unbounded probabilistic reachability verification extension
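The black-box innovation above rests on estimating unknown transition probabilities from samples while tracking the estimation error. A minimal sketch of that ingredient, assuming a hypothetical sampler and a Hoeffding-style confidence interval (the paper's full algorithm combines such per-transition intervals into guarantees on the overall reachability bounds):

```python
import math
import random

def sample_success(p_true=0.7):
    """Stand-in for one black-box MDP step; the algorithm
    never sees p_true, only the sampled outcomes."""
    return random.random() < p_true

def estimate_with_ci(n=10_000, delta=0.01):
    """Estimate a transition probability from n samples and attach a
    Hoeffding confidence interval that holds with probability >= 1 - delta."""
    hits = sum(sample_success() for _ in range(n))
    p_hat = hits / n
    # Hoeffding bound: Pr(|p_hat - p| > eps) <= 2 * exp(-2 * n * eps^2)
    eps = math.sqrt(math.log(2 / delta) / (2 * n))
    return max(0.0, p_hat - eps), min(1.0, p_hat + eps)
```

Shrinking intervals like this one yield the efficient stopping criterion: once the propagated lower and upper bounds are close enough, sampling can stop with a quantified confidence.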