🤖 AI Summary
This work addresses the challenge of estimating the marginal likelihood (evidence) in likelihood-free inference, where the likelihood function is intractable. We propose the first method to approximate the marginal likelihood directly from the output of sequential neural likelihood estimation (SNLE), without requiring additional simulations or posterior sampling. Leveraging only the likelihood estimates obtained during SNLE training, the approach constructs an efficient and broadly applicable evidence estimator. Crucially, it also systematically integrates SNLE with Bayesian model comparison, bridging a key gap in simulation-based inference (SBI), and achieves improved estimation reliability over existing SBI model-selection methods while preserving computational efficiency. Extensive experiments on multiple benchmark tasks demonstrate that the estimator closely approximates the true marginal likelihood, advancing both the theoretical foundations and the practical applicability of neural density estimation for Bayesian model selection.
📝 Abstract
The marginal likelihood, or evidence, plays a central role in Bayesian model selection, yet it remains notoriously difficult to compute in likelihood-free settings. While Simulation-Based Inference (SBI) techniques such as Sequential Neural Likelihood Estimation (SNLE) offer powerful tools for approximating posteriors with neural density estimators, they typically do not provide estimates of the evidence. In this technical report, presented at BayesComp 2025, we describe a simple and general methodology for estimating the marginal likelihood from the output of SNLE.
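To make the idea concrete: once SNLE has produced a surrogate likelihood q_phi(x | theta), the evidence p(x) = ∫ p(x | theta) p(theta) dtheta can be approximated by Monte Carlo integration of the surrogate over prior draws. The sketch below illustrates this with a toy Gaussian surrogate and a standard-normal prior; both are illustrative assumptions, not the estimator proposed in the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_log_likelihood(x, theta):
    # Stand-in for a trained SNLE density estimator q_phi(x | theta).
    # Here we assume x | theta ~ N(theta, 1) so the example has a
    # closed-form answer to check against.
    return -0.5 * (x - theta) ** 2 - 0.5 * np.log(2.0 * np.pi)

def log_evidence(x, n_samples=100_000):
    # p(x) ~= (1/N) * sum_i q_phi(x | theta_i), with theta_i drawn
    # from the prior; computed in log space for numerical stability.
    theta = rng.normal(0.0, 1.0, size=n_samples)  # prior: N(0, 1)
    logq = surrogate_log_likelihood(x, theta)
    m = logq.max()
    return m + np.log(np.mean(np.exp(logq - m)))  # log-mean-exp

x_obs = 0.5
est = log_evidence(x_obs)

# Analytic check: marginally x ~ N(0, 2), so log p(x) is known exactly.
true = -0.25 * x_obs**2 - 0.5 * np.log(4.0 * np.pi)
print(est, true)
```

With a neural surrogate in place of the toy Gaussian, the same prior-sampling estimator applies unchanged, which is why no extra simulator calls or posterior samples are needed.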