🤖 AI Summary
This work addresses the challenge of uniformly handling diverse verification objectives on probabilistic programs—such as higher-order moments, probabilities of exceeding reward thresholds, and expected values beyond a budget—which are difficult to reconcile under a single framework. To this end, the authors propose a general program transformation technique that introduces explicit reward statements into the language and applies monotone function transformations to the accumulated reward, using incremental reward differences. This approach systematically reduces a broad class of verification objectives to a canonical form amenable to probabilistic weakest-preexpectation (wp) reasoning. Integrated into the Caesar deductive verifier, the method enables fully automated verification of complex probabilistic properties, significantly enhancing both the expressiveness and generality of probabilistic program analysis.
📝 Abstract
We present a one-fits-all programmatic approach to reason about a plethora of objectives on probabilistic programs. The first ingredient is to add a reward statement to the language. We then define a program transformation that applies a monotone function to the cumulative reward of the program. The key idea is that this transformation uses incremental differences in the reward. This simple, elegant approach makes it possible to express, e.g., higher moments, threshold probabilities of rewards, the expected excess over a budget, and moment-generating functions. All these objectives can now be analyzed using a single existing approach: probabilistic wp-reasoning. We automated verification using the Caesar deductive verifier and report on the application of the transformation to some examples.
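To give an intuition for the incremental-difference idea, here is a minimal Monte Carlo sketch (not the paper's formalism; the toy program, the names `run_once` and `f`, and the sampling setup are our illustrative assumptions). Instead of tracking the raw cumulative reward `acc`, the transformed program accumulates the differences `f(acc + r) - f(acc)` at each reward statement; these differences telescope to `f(acc) - f(0)`, so their expected value is `E[f(R)]` for the total reward `R`:

```python
import random

def f(x):
    # A monotone function of the (nonnegative) cumulative reward;
    # with f(r) = r**2, E[f(R)] is the second moment of the total reward.
    return x * x

def run_once():
    # Toy probabilistic program: flip a fair coin 3 times, reward 1 per heads.
    acc = 0      # cumulative reward of the original program
    tracked = 0  # accumulator of the transformed program
    for _ in range(3):
        if random.random() < 0.5:
            r = 1  # models a `reward 1` statement
            tracked += f(acc + r) - f(acc)  # incremental difference
            acc += r
    return acc, tracked

random.seed(0)
samples = [run_once() for _ in range(100_000)]
# The differences telescope: tracked == f(acc) - f(0) on every run.
assert all(tracked == f(acc) for acc, tracked in samples)
# Averaging `tracked` estimates E[R^2]; for Binomial(3, 1/2) this is
# Var + mean^2 = 0.75 + 1.5^2 = 3.0.
second_moment = sum(t for _, t in samples) / len(samples)
```

In the paper's setting, of course, no sampling is involved: the same transformed accumulator is analyzed symbolically via wp-reasoning, which is what Caesar automates.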