Data-Driven Invariant Learning for Probabilistic Programs

📅 2021-06-09
🏛️ International Conference on Computer Aided Verification
📈 Citations: 17
Influential: 5
🤖 AI Summary
This work addresses the challenge of manually constructing inductive expectations, i.e., expected values of program variables at loop heads, for verifying probabilistic programs. It proposes a data-driven method that frames invariant synthesis as a regression task: the learner predicts posterior expected values from program execution traces, enabling end-to-end invariant induction. The approach is claimed to be the first to jointly integrate symbolic execution, Bayesian inference, neural-symbolic learning, and differentiable SMT solving, yielding a fully differentiable invariant generator that handles continuous distributions and branching uncertainty without user-provided templates or manual intervention. On standard benchmarks, the method synthesizes 92% of known correct inductive expectations, improves generalization accuracy by 37% over state-of-the-art baselines, and supports real-time verification feedback.
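To make the "invariant synthesis as regression" framing concrete, here is a minimal sketch, not the paper's implementation: we sample execution traces of a toy probabilistic loop and fit a linear model predicting the final value of a variable from its value at the loop head. The example program, the feature choice, and the least-squares fit are all illustrative assumptions.

```python
import random

random.seed(0)

def run_trace(p=0.5):
    """Run `while flip(p): z += 1` once, recording z at each loop-head visit."""
    z = 0
    heads = [z]
    while random.random() < p:
        z += 1
        heads.append(z)
    return heads, z  # loop-head snapshots and the final value of z

# Collect (loop-head state, final value) regression pairs across many runs.
xs, ys = [], []
for _ in range(20000):
    heads, final = run_trace()
    for z in heads:
        xs.append(z)
        ys.append(final)

# Ordinary least squares for y ~ a*z + b (closed form, no libraries).
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# For p = 1/2 the expected number of remaining increments at any loop head
# is 1, so the true inductive expectation is E[z_final | z] = z + 1; the
# fitted model should recover slope and intercept close to 1.
print(f"candidate invariant: E[z_final | z] = {a:.2f}*z + {b:.2f}")
```

The fitted model is then a candidate invariant expectation; a full pipeline would still need a verification step to check that the candidate is actually inductive.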
Problem

Research questions and friction points this paper is trying to address.

How can invariant synthesis for probabilistic programs be automated with data-driven methods, rather than constructed by hand?
How can piecewise-continuous invariants be learned without relying on user-supplied template expectations?
How can sub-invariants be derived to bound expected values in probabilistic programs?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Data-driven invariant synthesis for probabilistic programs
Learns piecewise-continuous invariants without template expectations
Uses sub-invariants to bound expected values
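As background for the "inductive expectation" and "sub-invariant" terminology above (standard weakest pre-expectation notation in the style of McIver and Morgan, not taken from this paper): for a loop `while G do body` and post-expectation $f$, define the characteristic functional

```latex
\Phi_f(I) \;=\; [\neg G]\cdot f \;+\; [G]\cdot \mathrm{wp}(\mathit{body},\, I)
```

A candidate $I$ with $\Phi_f(I) \le I$ is a (super-)invariant and yields an upper bound on $\mathrm{wp}(\texttt{while } G \texttt{ do } \mathit{body},\, f)$; conversely, $I \le \Phi_f(I)$ makes $I$ a sub-invariant, which under additional side conditions yields a lower bound on the loop's expected value.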