AI Summary
Conditional inference and optimization for probabilistic programs that mix continuous and discrete variables typically rely on sampling-based methods, which suffer from slow convergence and inefficiency. The authors propose a differentiable Gaussian approximation semantics that combines Gaussian mixture representations, smooth handling of measure-zero predicates, and automatic differentiation. This approach enables, for the first time, end-to-end, sampling-free gradient-based optimization of probabilistic programs with discrete branching and continuous variables, while preserving differentiability of path probabilities and posterior expressions. Evaluated on 13 benchmark programs, the method achieves accuracy and efficiency comparable to variational inference and MCMC, and converges stably on continuous conditional optimization tasks where sampling-based approaches fail.
Abstract
We present DeGAS, a differentiable Gaussian approximate semantics for loopless probabilistic programs that enables sample-free, gradient-based optimization in models with both continuous and discrete components. DeGAS evaluates programs under a Gaussian-mixture semantics and replaces measure-zero predicates and discrete branches with a vanishing smoothing, yielding closed-form expressions for posterior and path probabilities. We prove that these quantities are differentiable with respect to program parameters, enabling end-to-end optimization via standard automatic differentiation, without Monte Carlo estimators. On thirteen benchmark programs, DeGAS achieves accuracy and runtime competitive with variational inference and MCMC. Importantly, it reliably solves optimization problems where sampling-based baselines fail to converge because of conditioning on continuous variables.
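To make the smoothing idea concrete, here is a minimal illustrative sketch (not the paper's actual semantics, which handles Gaussian mixtures and discrete branches): for a prior x ~ N(mu, sigma^2), the measure-zero predicate observe(x == c) is replaced by a Gaussian kernel of width eps. The smoothed evidence then has a closed form, N(c; mu, sigma^2 + eps^2), which is differentiable in mu, so a gradient-based optimizer can use it directly with no sampling. All names here (smoothed_weight, grad_log_weight) are ours, chosen for the example.

```python
import math

def smoothed_weight(mu, sigma, c, eps):
    """Smoothed evidence for observe(x == c) with x ~ N(mu, sigma^2).

    The hard, measure-zero predicate is relaxed to a Gaussian kernel
    of width eps; the resulting integral is closed-form:
        w(mu) = N(c; mu, sigma^2 + eps^2).
    As eps -> 0 this recovers the density of x at c.
    """
    var = sigma ** 2 + eps ** 2
    return math.exp(-(c - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def grad_log_weight(mu, sigma, c, eps):
    """Closed-form gradient d/dmu log w(mu) = (c - mu) / (sigma^2 + eps^2)."""
    return (c - mu) / (sigma ** 2 + eps ** 2)

# Sanity check: the closed-form gradient matches a central finite difference,
# so a standard autodiff system could optimize mu through the smoothed evidence.
mu, sigma, c, eps, h = 0.5, 1.0, 2.0, 1e-3, 1e-6
numeric = (math.log(smoothed_weight(mu + h, sigma, c, eps))
           - math.log(smoothed_weight(mu - h, sigma, c, eps))) / (2 * h)
analytic = grad_log_weight(mu, sigma, c, eps)
```

A sampling-based estimator of this same quantity would need to hit the event x == c, which has probability zero; the closed-form smoothed weight sidesteps that entirely, which is the failure mode of sampling baselines that the abstract refers to.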