🤖 AI Summary
Multi-objective verification for parametric probabilistic automata (pPA), in particular queries combining probabilistic properties and (parametric) expected total rewards, suffers from severely limited scalability.
Method: This paper introduces the first compositional assume-guarantee (AG) reasoning framework for pPA, proposing three AG proof rules: asymmetric, circular, and interleaving. It further establishes a compositional rule for reasoning about monotonicity in composed pPA, capturing parameter dependencies between components. The approach integrates AG reasoning, parametric model checking, multi-objective probabilistic logics (e.g., PCTL+R), and monotonicity analysis.
Results: Experiments demonstrate that the framework efficiently verifies a broad class of multi-objective queries, improving scalability while preserving verification precision, including under uncertainty and high parameter sensitivity.
📝 Abstract
We establish an assume-guarantee (AG) framework for compositional reasoning about multi-objective queries in parametric probabilistic automata (pPA), an extension of probabilistic automata (PA) in which transition probabilities are functions over a finite set of parameters. We lift an existing framework for PA to the pPA setting, incorporating asymmetric, circular, and interleaving proof rules. Our approach enables the verification of a broad spectrum of multi-objective queries for pPA, encompassing probabilistic properties and (parametric) expected total rewards. Additionally, we introduce a rule for reasoning about monotonicity in composed pPA.
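To make the core notion concrete, here is a minimal, purely illustrative sketch (not the paper's framework or tooling): a tiny parametric Markov chain whose transition probabilities are functions of two parameters, instantiated at a concrete valuation and checked for a reachability objective by value iteration. All state and parameter names are hypothetical.

```python
# Hypothetical example: a 3-state parametric chain. From 'start' we reach
# 'goal' with probability p, retry with probability (1-p)*q, and fall into
# the absorbing 'fail' state otherwise. Closed form for reaching 'goal':
#   x = p + (1-p)*q*x  =>  x = p / (1 - (1-p)*q), a rational function of p, q.

def make_chain(p, q):
    # Transition probabilities are functions of the parameters (p, q);
    # instantiating them yields an ordinary (non-parametric) Markov chain.
    return {
        "start": {"goal": p, "start": (1 - p) * q, "fail": (1 - p) * (1 - q)},
        "goal": {"goal": 1.0},   # absorbing
        "fail": {"fail": 1.0},   # absorbing
    }

def reach_goal(p, q, iters=10_000):
    """Pr[eventually reach 'goal' from 'start'] at a concrete valuation."""
    chain = make_chain(p, q)
    val = {s: 0.0 for s in chain}
    val["goal"] = 1.0
    # Fixpoint iteration for the reachability probability.
    for _ in range(iters):
        val["start"] = sum(pr * val[t] for t, pr in chain["start"].items())
    return val["start"]

print(round(reach_goal(0.5, 0.5), 6))  # ~0.666667, matching p / (1 - (1-p)*q)
```

In this toy setting, the reachability probability is monotone increasing in p (e.g., `reach_goal(0.7, 0.5) > reach_goal(0.5, 0.5)`), the kind of parameter-dependence fact that monotonicity reasoning rules, as described in the abstract, aim to establish compositionally rather than by sampling valuations.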