Exact Sampling of Gibbs Measures with Estimated Losses

📅 2024-04-24
📈 Citations: 3
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the slow MCMC convergence in Gibbs posterior sampling under stochastic loss functions, which stems from spurious dependence on the number of pseudo-observations. We propose the first pseudo-sample-size–independent corrected piecewise deterministic Markov process (PDMP) sampler. By designing a novel jump-rate function and direction mechanism, our method rigorously ensures that the invariant measure remains invariant to the pseudo-observation count—thereby overcoming the inherent trade-off between asymptotic bias and slow convergence in conventional stochastic-loss inference. We prove that the sampler converges exactly to the target Gibbs posterior measure with a uniform convergence rate independent of pseudo-sample size. Empirical validation across three canonical settings—likelihood-intractable models, misspecified models, and stochastic losses—demonstrates elimination of pseudo-sample-size bias in posterior sampling, alongside substantial improvements in robustness and estimation accuracy.

Technology Category

Application Category

📝 Abstract
In recent years, the shortcomings of Bayesian posteriors as inferential devices have received increased attention. A popular strategy for fixing them has been to instead target a Gibbs measure based on losses that connect a parameter of interest to observed data. However, existing theory for such inference procedures assumes these losses are analytically available, while in many situations these losses must be stochastically estimated using pseudo-observations. In such cases, we show that when standard Markov Chain Monte Carlo algorithms are used to produce posterior samples, the resulting posterior exhibits strong dependence on the number of pseudo-observations: unless the number of pseudo-observations diverge sufficiently fast the resulting posterior will concentrate very slowly. However, we show that in many situations it is feasible to alleviate this dependence entirely using a modified piecewise deterministic Markov process (PDMP) sampler, and we formally and empirically show that these samplers produce posterior draws that have no dependence on the number of pseudo-observations used to estimate the loss within the Gibbs Measure. We apply our results to three examples that feature intractable likelihoods and model misspecification.
Problem

Research questions and friction points this paper is trying to address.

Addressing slow convergence in Gibbs measures with estimated losses
Reducing pseudo-observation dependence in MCMC posterior sampling
Improving inference for intractable likelihoods and model misspecification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Gibbs measure based on estimated losses
Modifies PDMP sampler for exact sampling
Eliminates dependence on pseudo-observations count
🔎 Similar Papers
No similar papers found.
D
David Frazier
Monash University, Department of Econometrics and Business Statistics, Australia
Jeremias Knoblauch
Jeremias Knoblauch
Associate professor & EPSRC Fellow @ University College London
post-Bayesian inferencegeneralised Bayesrobustnessvariational methods
Jack Jewson
Jack Jewson
Department of Econometrics and Business Statistics, Monash University
Bayesian decision theoryrobustnessmodel misspecification
C
Christopher C. Drovandi
Queensland University of Technology, School of Mathematical Sciences, Australia