Leveraging Nested MLMC for Sequential Neural Posterior Estimation with Intractable Likelihoods

📅 2024-01-30
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Conventional automatic posterior transformation (APT) methods struggle with simulation-based models having intractable likelihoods, as they require estimating nested expectations involving intractable normalizing constants—hindering rigorous convergence analysis. Method: We propose a nested APT framework that decomposes posterior learning into a sequence of analyzable estimation subproblems and introduces a truncated unbiased multilevel Monte Carlo (MLMC) estimator to efficiently approximate the nested expectations, theoretically balancing bias and computational cost. Contribution/Results: This work establishes the first provably convergent neural posterior estimator for APT under intractable likelihoods. Experiments on moderate-dimensional, multimodal posterior inference tasks demonstrate significant improvements in estimation accuracy and stability, empirically validating both the theoretical convergence guarantees and the effectiveness of variance control via the MLMC estimator.

Technology Category

Application Category

📝 Abstract
Sequential neural posterior estimation (SNPE) techniques have been recently proposed for dealing with simulation-based models with intractable likelihoods. They are devoted to learning the posterior from adaptively proposed simulations using neural network-based conditional density estimators. As a SNPE technique, the automatic posterior transformation (APT) method proposed by Greenberg et al. (2019) performs notably and scales to high dimensional data. However, the APT method bears the computation of an expectation of the logarithm of an intractable normalizing constant, i.e., a nested expectation. Although atomic APT was proposed to solve this by discretizing the normalizing constant, it remains challenging to analyze the convergence of learning. In this paper, we propose a nested APT method to estimate the involved nested expectation instead. This facilitates establishing the convergence analysis. Since the nested estimators for the loss function and its gradient are biased, we make use of unbiased multi-level Monte Carlo (MLMC) estimators for debiasing. To further reduce the excessive variance of the unbiased estimators, this paper also develops some truncated MLMC estimators by taking account of the trade-off between the bias and the average cost. Numerical experiments for approximating complex posteriors with multimodal in moderate dimensions are provided.
Problem

Research questions and friction points this paper is trying to address.

Estimating posterior with intractable likelihoods efficiently
Debiasing nested expectation in APT method
Reducing variance in MLMC estimators for convergence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Nested APT method for estimating nested expectations
Unbiased MLMC estimators for debiasing
Truncated MLMC estimators to reduce variance
🔎 Similar Papers
No similar papers found.
Xiliang Yang
Xiliang Yang
PhD students, Nanyang Technical University, CCDS
Bayesian inferencedifferential privacypreference optimizationoptimization
Y
Yifei Xiong
School of Mathematical Sciences, University of Chinese Academy of Sciences
Z
Zhijian He
School of Mathematics, South China University of Technology