🤖 AI Summary
This work addresses the challenge of manually constructing inductive expectations (i.e., expected values of program variables at loop heads) for verifying probabilistic programs. We propose a data-driven method that frames inductive expectation synthesis as a regression task: a model learns to predict posterior expected values from program execution traces, enabling end-to-end invariant induction. Our approach is the first to jointly integrate symbolic execution, Bayesian inference, neural-symbolic learning, and differentiable SMT solving, yielding a fully differentiable invariant generator that handles continuous distributions and branching uncertainty without user-provided templates or manual intervention. On standard benchmarks, our method synthesizes 92% of known correct inductive expectations, improves generalization accuracy by 37% over state-of-the-art baselines, and supports real-time verification feedback.
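
To make the regression framing concrete, here is a minimal sketch that is not the paper's pipeline: a toy loop `x := 0; while Bernoulli(0.5): x := x + 1`, a hypothetical `run_trace` helper that logs loop-head states with the run's final value, and an ordinary least-squares fit standing in for the learned predictor. The fitted map recovers the true inductive expectation `E[x_final | x] = x + 1`.

```python
# Illustrative sketch only: expectation-invariant synthesis as regression.
# Everything here (the loop, run_trace, the linear model) is a hypothetical
# stand-in for the paper's learned components.
import random
import numpy as np

def run_trace(p=0.5):
    """Execute the toy loop once; return (loop-head state, final value) pairs."""
    x = 0
    head_states = []
    while True:
        head_states.append(x)        # snapshot the state at the loop head
        if random.random() >= p:     # Bernoulli(p) guard fails -> loop exits
            break
        x += 1
    # Every head state observed in this run pairs with the same final x.
    return [(s, x) for s in head_states]

# Collect a dataset of (state, eventual value) samples across many runs.
samples = [pair for _ in range(20_000) for pair in run_trace()]
X = np.array([[1.0, s] for s, _ in samples])   # features: bias term + x
y = np.array([final for _, final in samples])  # regression target: final x

# Least-squares regression predicts the posterior expected value from the state.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"learned invariant: E[x_final | x] ~ {coef[0]:.2f} + {coef[1]:.2f}*x")
# With p = 0.5 the remaining increments are geometric with mean 1, so the
# coefficients converge to ~ (1, 1), i.e., the true invariant x + 1.
```

Note that longer runs contribute more loop-head samples, but this only skews the marginal distribution of states; the conditional mean being fitted is unchanged, so the regression still converges to the correct inductive expectation.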