🤖 AI Summary
In standard models mixing probability and nondeterminism, ordinary assignment statements may fail to commute with variable-disjoint program fragments. This undermines a key assumption behind the soundness of simulation-based data refinement and prevents the modelling of probabilistic datatypes with encapsulated state. This paper develops a forward/backward simulation theory for partially observable probabilistic systems: it introduces a weakest precondition semantics, via loss (function) transformers, for Kuifje$_\sqcap$, a language for partially observable Markov decision processes; proves soundness of both forward and backward simulations in this setting; and identifies dual healthiness conditions under which they hold — forward simulations must not leak information, and backward simulations must not exploit leaked information. Together, these results restore the reliability of simulation methods for mixed probabilistic–nondeterministic programs and enable formal verification of probabilistic data types with encapsulated state.
📝 Abstract
Data refinement is the standard extension of a refinement relation from programs to datatypes (i.e. a behavioural subtyping relation). Forward/backward simulations provide a tractable method for establishing data refinement, and have been thoroughly studied for nondeterministic programs. However, for standard models of mixed probability and nondeterminism, ordinary assignment statements may not commute with (variable-disjoint) program fragments. This (1) invalidates a key assumption underlying the soundness of simulations, and (2) prevents modelling probabilistic datatypes with encapsulated state. We introduce a weakest precondition semantics for Kuifje$_\sqcap$, a language for partially observable Markov decision processes, using so-called loss (function) transformers. We prove soundness of forward/backward simulations in this richer setting, modulo healthiness conditions with a remarkable duality: forward simulations cannot leak information, and backward simulations cannot exploit leaked information.
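The interaction between probabilistic choice and nondeterminism that the abstract alludes to can be sketched with a minimal weakest pre-expectation transformer in the McIver–Morgan style (this is a standard textbook construction, not the paper's loss transformers; the encoding and names below are assumptions). Demonic choice takes the pointwise minimum, probabilistic choice the convex combination, and assignment substitutes into the post-expectation.

```python
# Hedged sketch: a weakest pre-expectation transformer for a tiny language
# with assignment, probabilistic choice, and demonic choice. States are
# dicts; post-expectations map states to reals in [0, 1].

def wp(prog, post):
    """Return the weakest pre-expectation of `post` under `prog`."""
    kind = prog[0]
    if kind == "assign":                      # ("assign", var, expr)
        _, var, expr = prog
        return lambda s: post({**s, var: expr(s)})
    if kind == "seq":                         # ("seq", p1, p2)
        _, p1, p2 = prog
        return wp(p1, wp(p2, post))
    if kind == "prob":                        # ("prob", p, left, right)
        _, p, left, right = prog
        lw, rw = wp(left, post), wp(right, post)
        return lambda s: p * lw(s) + (1 - p) * rw(s)
    if kind == "demon":                       # ("demon", left, right)
        _, left, right = prog
        lw, rw = wp(left, post), wp(right, post)
        return lambda s: min(lw(s), rw(s))    # demonic: worst case
    raise ValueError(kind)

# x := 0 [0.5] x := 1; then demonically y := 0 or y := 1.
prog = ("seq",
        ("prob", 0.5, ("assign", "x", lambda s: 0),
                      ("assign", "x", lambda s: 1)),
        ("demon", ("assign", "y", lambda s: 0),
                  ("assign", "y", lambda s: 1)))
post = lambda s: 1.0 if s["x"] == s["y"] else 0.0
print(wp(prog, post)({}))  # 0.0: the demon, seeing x, always avoids a match
```

The example illustrates the phenomenon the paper is concerned with: a demon resolved after the probabilistic choice can exploit knowledge of `x`, which is exactly the kind of information flow that the healthiness conditions on simulations must control.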