Predictively Oriented Posteriors

📅 2025-10-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Accurately quantifying predictive uncertainty under model misspecification remains challenging. Method: We propose the predictively oriented (PrO) posterior, a unified framework that integrates the strengths of parameter inference and density estimation by constructing the posterior distribution explicitly to optimize predictive performance. The PrO posterior converges at rate $n^{-1/2}$ to the true parameter under correct specification, while automatically stabilising towards the optimal predictive distribution under misspecification—thereby explicitly disentangling irreducible uncertainty from model inadequacy. Efficient sampling is achieved via mean-field Langevin dynamics. Contribution/Results: We establish asymptotic superiority of PrO posteriors over both classical and generalised Bayesian methods. Extensive numerical experiments demonstrate robust predictive gains and practical utility across regression, classification, and generative modeling tasks.

📝 Abstract
We advocate for a new statistical principle that combines the most desirable aspects of both parameter inference and density estimation. This leads us to the predictively oriented (PrO) posterior, which expresses uncertainty as a consequence of predictive ability. Doing so leads to inferences which predictively dominate both classical and generalised Bayes posterior predictive distributions: up to logarithmic factors, PrO posteriors converge to the predictively optimal model average at rate $n^{-1/2}$. Whereas classical and generalised Bayes posteriors only achieve this rate if the model can recover the data-generating process, PrO posteriors adapt to the level of model misspecification. This means that they concentrate around the true model at rate $n^{-1/2}$ in the same way as Bayes and Gibbs posteriors if the model can recover the data-generating distribution, but do not concentrate in the presence of non-trivial forms of model misspecification. Instead, they stabilise towards a predictively optimal posterior whose degree of irreducible uncertainty admits an interpretation as the degree of model misspecification -- a sharp contrast to how Bayesian uncertainty and its existing extensions behave. Lastly, we show that PrO posteriors can be sampled from by evolving particles based on mean field Langevin dynamics, and verify the practical significance of our theoretical developments on a number of numerical examples.
Problem

Research questions and friction points this paper is trying to address.

Proposing predictively oriented posteriors for statistical inference
Addressing model misspecification in Bayesian predictive distributions
Achieving predictive optimality through adaptive posterior convergence rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

PrO posteriors optimize predictive ability over parameter inference
PrO posteriors adapt to model misspecification levels automatically
PrO posteriors sampled via mean field Langevin dynamics
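The particle-based sampler mentioned above can be illustrated with a minimal Langevin-dynamics sketch. This is a generic toy illustration, not the paper's PrO objective: the target log-density gradient below is a placeholder standard normal, where the paper would instead use the gradient of its prediction-oriented potential over the particle ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder assumption: a standard-normal target, so
# grad log p(theta) = -theta. In the PrO setting this would be
# replaced by the gradient of the predictive objective, which
# couples each particle to the empirical particle measure.
def grad_log_p(theta):
    return -theta

n_particles, n_steps, step = 1000, 2000, 1e-2
particles = rng.normal(size=n_particles) * 3.0  # over-dispersed init

for _ in range(n_steps):
    noise = rng.normal(size=n_particles)
    # Unadjusted Langevin update: drift towards high density,
    # plus injected noise that maintains posterior spread.
    particles = (particles
                 + step * grad_log_p(particles)
                 + np.sqrt(2 * step) * noise)
```

After burn-in, the particle cloud approximates the target (here, mean near 0 and variance near 1); in the mean-field version the drift itself depends on the current particle distribution.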
Yann McLatchie
University College London
Bayesian statistics

Badr-Eddine Cherief-Abdellatif
CNRS, Sorbonne Université

David T. Frazier
Department of Econometrics and Business Statistics, Monash University

Jeremias Knoblauch
Associate professor & EPSRC Fellow @ University College London
post-Bayesian inference · generalised Bayes · robustness · variational methods