Refuting Equivalence in Probabilistic Programs with Conditioning

📅 2025-01-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the problem of *refuting posterior-distribution equivalence* for probabilistic programs featuring conditioning constructs (e.g., `observe`/`score`), i.e., automatically proving that two such programs induce different output distributions. To tackle this challenge, we propose *weighted restarting*, a transformation that converts a conditioned program into an output-equivalent unconditioned one, making the posterior amenable to symbolic reasoning over standard probabilistic semantics. The resulting method is the first equivalence-refutation approach for conditioned probabilistic programs that is both fully automated and provably correct. We evaluate it on standard probabilistic inference benchmarks, demonstrating that it efficiently produces *verifiable counterexamples*.

📝 Abstract
We consider the problem of refuting equivalence of probabilistic programs, i.e., the problem of proving that two probabilistic programs induce different output distributions. We study this problem in the context of programs with conditioning (i.e., with observe and score statements), where the output distribution is conditioned on the event that all observe statements along a run evaluate to true, and where the probability densities of individual runs may be updated via score statements. Building on recent work on programs without conditioning, we present a new equivalence-refutation method for programs with conditioning. Our method is based on weighted restarting, a novel transformation, introduced in this work, of probabilistic programs with conditioning into output-equivalent probabilistic programs without conditioning. Our method is the first to be both (a) fully automated and (b) provably correct. We demonstrate the applicability of our method on a set of programs from the probabilistic inference literature.
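The restart idea behind the transformation, specialized to `observe` statements, can be sketched as follows: whenever an `observe` fails, the run is discarded and the program restarts from scratch, so the plain output distribution of the restarted program equals the posterior of the original. The program and distribution below are a hypothetical illustration, not an example from the paper, and the sketch omits the weighting needed to handle `score` statements:

```python
import random

def conditioned_program():
    """A toy conditioned program: flip two fair coins, observe x + y >= 1.
    Returns None when the observe fails, else the output x + y."""
    x = random.randint(0, 1)
    y = random.randint(0, 1)
    if x + y < 1:          # observe(x + y >= 1) fails on this run
        return None
    return x + y

def restarted_program():
    """Unconditioned version: restart whenever an observe would fail.
    Its unconditioned output distribution equals the posterior of
    conditioned_program (here: P(1) = 2/3, P(2) = 1/3)."""
    while True:
        out = conditioned_program()
        if out is not None:
            return out

# Empirical check that the restarted program matches the exact posterior.
random.seed(0)
samples = [restarted_program() for _ in range(100_000)]
freq_1 = samples.count(1) / len(samples)   # should be close to 2/3
freq_2 = samples.count(2) / len(samples)   # should be close to 1/3
```

Once conditioning has been eliminated this way, the two programs can be compared by refutation techniques that only need to reason about unconditioned output distributions.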
Problem

Research questions and friction points this paper is trying to address.

Probabilistic Programs
Randomness
Output Discrepancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated Verification
Probabilistic Programs
Output Consistency