Detecting Model Misspecification in Bayesian Inverse Problems via Variational Gradient Descent

📅 2025-12-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In Bayesian inverse problems, model misspecification induces severe inference bias, yet effective automated diagnostic tools remain lacking. This paper proposes a prediction-oriented mixture posterior framework for detecting model misspecification: an entropy-regularized mixture posterior is efficiently computed via variational gradient descent and systematically compared against the standard Bayesian posterior; the quantified discrepancy serves as a diagnostic metric for misspecification. The approach requires no assumption about the true data-generating model and offers both theoretical interpretability and computational scalability. In synthetic experiments and real-world seismic inversion tasks, the method successfully identifies structured model deficiencies—including prior–likelihood mismatch and likelihood mis-specification—outperforming conventional posterior predictive checks. To our knowledge, this work establishes the first differentiable, optimization-enabled, end-to-end diagnostic paradigm for Bayesian inverse problems.

Technology Category

Application Category

📝 Abstract
Bayesian inference is optimal when the statistical model is well-specified, while outside this setting Bayesian inference can catastrophically fail; accordingly a wealth of post-Bayesian methodologies have been proposed. Predictively oriented (PrO) approaches lift the statistical model $P_θ$ to an (infinite) mixture model $int P_θ; mathrm{d}Q(θ)$ and fit this predictive distribution via minimising an entropy-regularised objective functional. In the well-specified setting one expects the mixing distribution $Q$ to concentrate around the true data-generating parameter in the large data limit, while such singular concentration will typically not be observed if the model is misspecified. Our contribution is to demonstrate that one can empirically detect model misspecification by comparing the standard Bayesian posterior to the PrO `posterior' $Q$. To operationalise this, we present an efficient numerical algorithm based on variational gradient descent. A simulation study, and a more detailed case study involving a Bayesian inverse problem in seismology, confirm that model misspecification can be automatically detected using this framework.
Problem

Research questions and friction points this paper is trying to address.

Detect model misspecification in Bayesian inference
Compare Bayesian posterior with predictive-oriented posterior
Use variational gradient descent for efficient detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detect model misspecification via variational gradient descent
Compare standard Bayesian posterior with predictive-oriented posterior
Apply entropy-regularized objective to lift statistical model
Q
Qingyang Liu
Newcastle University, UK
M
Matthew A. Fisher
Newcastle University, UK
Z
Zheyang Shen
Newcastle University, UK
Katy Tant
Katy Tant
University of Glasgow, UK
X
Xuebin Zhao
University of Edinburgh, UK
A
Andrew Curtis
University of Edinburgh, UK
Chris. J. Oates
Chris. J. Oates
Newcastle University
Statistics