noDice: Inference for Discrete Probabilistic Programs with Nondeterminism and Conditioning

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that existing probabilistic programming languages struggle to effectively support inference in discrete probabilistic programs involving nondeterminism—such as choices in Markov decision processes (MDPs)—and conditioning. It introduces, for the first time, a framework that integrates both nondeterminism and conditioning into discrete probabilistic programming by modeling the semantics of loop-free programs as MDPs, where program behaviors are characterized by scheduling policies. The authors extend the Dice inference engine to support this MDP-based semantics, employing decision diagrams as an intermediate representation and leveraging static analysis to achieve substantial state-space compression. This approach dramatically reduces the size of the underlying MDP, enabling exact and efficient inference for complex discrete probabilistic programs and thereby expanding the applicability of probabilistic programming to domains such as reinforcement learning.

📝 Abstract
Probabilistic programming languages (PPLs) are an expressive and intuitive means of representing complex probability distributions. In that realm, languages like Dice target an important class of probabilistic programs: those whose probability distributions are discrete. Discrete distributions are common in many fields, including text analysis, network verification, artificial intelligence, and graph analysis. Another important feature in the world of probabilistic modeling is nondeterministic choice, as found in Markov Decision Processes (MDPs), which play a major role in reinforcement learning. Modern PPLs usually lack support for nondeterminism. We address this gap by introducing noDice, which extends the discrete probabilistic inference engine Dice. noDice performs inference on loop-free programs by constructing an MDP such that the distributions modeled by the program correspond to schedulers in the MDP. Furthermore, decision diagrams are used as an intermediate step to exploit the program structure and drastically reduce the state space of the MDP.
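To make the scheduler-based semantics concrete, here is a minimal, hypothetical Python sketch (not the paper's implementation, which compiles programs to decision diagrams). It models a loop-free program with one coin flip, one nondeterministic choice, and one observation: each deterministic scheduler resolves the choice as a function of the already-sampled value, and exact inference enumerates all schedulers to report the range of posterior probabilities they induce.

```python
# Hypothetical loop-free program, written out by hand:
#   x = flip(0.6)
#   a = choose({0, 1})            # nondeterministic choice
#   y = flip(0.7) if a == 1 else flip(0.3)
#   observe(x or y)
#   query: Pr[y = True]

def infer(scheduler):
    """Exact conditioned inference for one fixed scheduler.

    scheduler maps the observed value of x to an action a in {0, 1},
    resolving the nondeterministic choice deterministically.
    """
    p_evidence = 0.0            # mass satisfying observe(x or y)
    p_query_and_evidence = 0.0  # mass where additionally y is True
    for x in (True, False):
        px = 0.6 if x else 0.4
        a = scheduler[x]
        py_true = 0.7 if a == 1 else 0.3
        for y in (True, False):
            p = px * (py_true if y else 1.0 - py_true)
            if x or y:          # the observation
                p_evidence += p
                if y:
                    p_query_and_evidence += p
    return p_query_and_evidence / p_evidence

# Enumerate all deterministic schedulers (a as a function of x).
schedulers = [{True: a1, False: a0} for a1 in (0, 1) for a0 in (0, 1)]
results = [infer(s) for s in schedulers]
print(f"Pr[y | x or y] ranges over [{min(results):.4f}, {max(results):.4f}]")
```

The sketch enumerates the full joint distribution per scheduler, which is exponential in the number of variables; the point of noDice's decision-diagram representation is precisely to avoid this blow-up by exploiting program structure.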
Problem

Research questions and friction points this paper is trying to address.

probabilistic programming
discrete distributions
nondeterminism
Markov Decision Processes
inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

probabilistic programming
nondeterminism
Markov Decision Processes
decision diagrams
discrete inference