🤖 AI Summary
This work addresses likelihood-free simulation-based inference (SBI), where the simulator's likelihood is intractable and the induced posteriors are often highly structured and multi-modal. We propose an efficient variational autoencoder (VAE)-based framework for posterior estimation with two complementary treatments of the prior: (1) a data-adaptive multivariate prior network that conditions on the observation to improve generalization across posterior queries, and (2) a fixed standard Gaussian prior that keeps the model simple while remaining expressive. End-to-end variational inference yields a scalable, generative approximation of the posterior. On standard SBI benchmarks, our approach matches the accuracy of state-of-the-art normalizing flow-based methods while substantially reducing training and inference costs. The core contribution is the systematic integration of the VAE paradigm into SBI, establishing a lightweight, robust, and easily deployable solution for large-scale, high-dimensional problems.
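As a concrete illustration of the architecture described above, here is a minimal PyTorch sketch of a conditional VAE posterior estimator supporting both prior variants. All names, layer sizes, and the diagonal-Gaussian parameterization are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    """Amortized posterior estimator q(theta | x) built as a conditional VAE.

    If learned_prior is True, a prior network maps the observation x to a
    Gaussian prior p(z | x) (the data-adaptive variant); otherwise the
    prior is a standard normal N(0, I) (the simpler variant).
    """

    def __init__(self, theta_dim, x_dim, z_dim=8, hidden=64, learned_prior=True):
        super().__init__()
        self.z_dim = z_dim
        self.learned_prior = learned_prior
        # Encoder q(z | theta, x): outputs mean and log-variance of z.
        self.encoder = nn.Sequential(
            nn.Linear(theta_dim + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),
        )
        # Decoder p(theta | z, x): outputs mean and log-variance of theta.
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * theta_dim),
        )
        if learned_prior:
            # Prior network p(z | x): a diagonal Gaussian conditioned on x.
            self.prior_net = nn.Sequential(
                nn.Linear(x_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * z_dim),
            )

    def prior(self, x):
        # Return (mean, log-variance) of the prior over z for observation x.
        if self.learned_prior:
            mu, logvar = self.prior_net(x).chunk(2, dim=-1)
        else:
            mu = torch.zeros(x.shape[0], self.z_dim, device=x.device)
            logvar = torch.zeros_like(mu)
        return mu, logvar

    def forward(self, theta, x):
        # Encode (theta, x), sample z via the reparameterization trick, decode.
        q_mu, q_logvar = self.encoder(torch.cat([theta, x], -1)).chunk(2, -1)
        z = q_mu + torch.randn_like(q_mu) * (0.5 * q_logvar).exp()
        d_mu, d_logvar = self.decoder(torch.cat([z, x], -1)).chunk(2, -1)
        return q_mu, q_logvar, d_mu, d_logvar
```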
📝 Abstract
We present a generative modeling approach based on the variational inference framework for likelihood-free simulation-based inference. The method leverages latent variables within variational autoencoders to efficiently estimate complex posterior distributions arising from stochastic simulations. We explore two variations of this approach, distinguished by their treatment of the prior distribution. The first model adapts the prior to the observed data using a multivariate prior network, enhancing generalization across posterior queries. The second model uses a standard Gaussian prior, offering simplicity while still effectively capturing complex posterior distributions. We demonstrate the efficacy of these models on well-established benchmark problems, achieving results comparable to those of flow-based approaches while maintaining computational efficiency and scalability.
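Continuing the sketch above (reusing the hypothetical `ConditionalVAE` class), the following snippet shows one way such a model could be trained by maximizing the ELBO on simulator-generated (theta, x) pairs and then queried for posterior samples. The Gaussian likelihood, random stand-in data, and sampling procedure are assumptions for illustration, not the paper's reported setup.

```python
import torch

def elbo_loss(model, theta, x):
    """Negative ELBO for one batch of simulated (theta, x) pairs."""
    q_mu, q_logvar, d_mu, d_logvar = model(theta, x)
    # Gaussian reconstruction term log p(theta | z, x), constants dropped.
    rec = -0.5 * (d_logvar + (theta - d_mu) ** 2 / d_logvar.exp()).sum(-1)
    # KL( q(z | theta, x) || p(z | x) ) between two diagonal Gaussians.
    p_mu, p_logvar = model.prior(x)
    kl = 0.5 * (p_logvar - q_logvar
                + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp()
                - 1.0).sum(-1)
    return (kl - rec).mean()

# Hypothetical usage: train on pairs obtained by running the simulator on
# prior draws (random tensors stand in for real simulations here).
model = ConditionalVAE(theta_dim=2, x_dim=5, learned_prior=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
theta_batch = torch.randn(128, 2)   # stand-in for prior draws of theta
x_batch = torch.randn(128, 5)       # stand-in for simulator outputs
loss = elbo_loss(model, theta_batch, x_batch)
opt.zero_grad()
loss.backward()
opt.step()

# Sampling the learned posterior q(theta | x_obs): draw z from the prior
# (learned or standard), then decode to theta.
with torch.no_grad():
    x_obs = torch.randn(1, 5).expand(1000, 5)
    p_mu, p_logvar = model.prior(x_obs)
    z = p_mu + torch.randn_like(p_mu) * (0.5 * p_logvar).exp()
    d_mu, d_logvar = model.decoder(torch.cat([z, x_obs], -1)).chunk(2, -1)
    theta_samples = d_mu + torch.randn_like(d_mu) * (0.5 * d_logvar).exp()
```

Because sampling requires only a prior draw and one decoder pass, inference cost stays low relative to methods that must invert or repeatedly evaluate a normalizing flow, which is consistent with the efficiency claim above.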