An efficient likelihood-free Bayesian inference method based on sequential neural posterior estimation

📅 2023-11-21
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
For high-dimensional simulator-based models with intractable likelihoods, this paper proposes an efficient and stable Sequential Neural Posterior Estimation (SNPE) method. The approach employs conditional neural density estimation within a sequential simulation framework, augmented by an adaptive calibration kernel mechanism—novelly introduced herein—to dynamically adjust kernel weights during inference. To further enhance stability and accelerate convergence, we integrate importance-weighted gradient variance reduction with Monte Carlo loss optimization. This combination effectively mitigates the inference bottlenecks inherent in high-dimensional settings while preserving posterior approximation accuracy. Extensive experiments on multiple benchmark simulators and real-world high-dimensional datasets demonstrate that our method achieves over a two-fold speedup in training time and reduces posterior approximation error by more than 30% compared to standard SNPE and other state-of-the-art approaches.
📝 Abstract
Sequential neural posterior estimation (SNPE) techniques have been recently proposed for dealing with simulation-based models with intractable likelihoods. Unlike approximate Bayesian computation, SNPE techniques learn the posterior from sequential simulation using neural network-based conditional density estimators by minimizing a specific loss function. The SNPE method proposed by Lueckmann et al. (2017) used a calibration kernel to boost the sample weights around the observed data, resulting in a concentrated loss function. However, the use of calibration kernels may increase the variances of both the empirical loss and its gradient, making the training inefficient. To improve the stability of SNPE, this paper proposes to use an adaptive calibration kernel and several variance reduction techniques. The proposed method greatly speeds up the process of training and provides a better approximation of the posterior than the original SNPE method and some existing competitors as confirmed by numerical experiments. We also managed to demonstrate the superiority of the proposed method for a high-dimensional model with a real-world dataset.
Problem

Research questions and friction points this paper is trying to address.

SNPE
Bayesian Inference
High-Dimensional Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Improved SNPE
High-dimensional Model Processing
Enhanced Bayesian Inference
🔎 Similar Papers
No similar papers found.
Y
Yifei Xiong
Department of Statistics, Purdue University, West Lafayette, United States
Xiliang Yang
Xiliang Yang
PhD students, Nanyang Technical University, CCDS
Bayesian inferencedifferential privacypreference optimizationoptimization
S
Sanguo Zhang
School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, China
Z
Zhijian He
School of Mathematics, South China University of Technology, Guangzhou, China