Sequential Gibbs Posteriors with Applications to Principal Component Analysis

📅 2023-10-19
📈 Citations: 2
Influential: 0
🤖 AI Summary
Standard Gibbs posteriors, which rely on a single temperature parameter, suffer from unreliable uncertainty quantification: frequentist coverage of credible intervals deviates severely from nominal levels and fails to improve as the sample size grows. Method: We propose a sequential Gibbs posterior framework that incorporates loss information incrementally across stages, improving inferential reliability. Methodologically, we combine nonconvex loss modeling with asymptotic statistical theory on manifolds. Contributions/Results: Theoretically, we establish the first concentration properties and Bernstein–von Mises (BvM) theorem for sequential Gibbs posteriors, and we derive the first BvM result for likelihood-based Bayesian posteriors on manifolds. In principal component analysis (PCA), our framework substantially improves the frequentist coverage of credible intervals while guaranteeing asymptotic normality and optimal convergence rates in large samples.
📝 Abstract
Gibbs posteriors are proportional to a prior distribution multiplied by an exponentiated loss function, with a key tuning parameter that weights information in the loss relative to the prior and controls posterior uncertainty. Gibbs posteriors provide a principled framework for likelihood-free Bayesian inference, but in many situations including a single tuning parameter inevitably leads to poor uncertainty quantification. In particular, regardless of the value of the parameter, credible regions have far from the nominal frequentist coverage, even in large samples. We propose a sequential extension of Gibbs posteriors to address this problem. We prove that the proposed sequential posterior exhibits concentration and a Bernstein-von Mises theorem, which holds under easy-to-verify conditions in Euclidean space and on manifolds. As a byproduct, we obtain the first Bernstein-von Mises theorem for traditional likelihood-based Bayesian posteriors on manifolds. All methods are illustrated with an application to principal component analysis.
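The abstract's description can be made concrete: a Gibbs posterior is proportional to prior(θ) · exp(−η · n · loss(θ)), where η is the tuning (temperature) parameter. Below is a minimal, hypothetical sketch, not taken from the paper, that samples such a posterior for a location parameter under squared-error loss via random-walk Metropolis; it shows how η directly controls the posterior spread, which is the root of the calibration problem the paper addresses. The data, prior, and parameter values are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: n observations with true location 2.0 (illustrative only)
n = 200
x = rng.normal(2.0, 1.0, size=n)

def loss(theta):
    """Empirical squared-error loss for a location parameter."""
    return np.mean((x - theta) ** 2)

def log_gibbs_post(theta, eta):
    """Unnormalized log Gibbs posterior: log prior - eta * n * loss.
    Prior is N(0, 10^2); eta is the temperature parameter."""
    log_prior = -0.5 * theta ** 2 / 100.0
    return log_prior - eta * n * loss(theta)

def metropolis(eta, n_iter=5000, step=0.2):
    """Random-walk Metropolis targeting the Gibbs posterior."""
    theta, lp = 0.0, log_gibbs_post(0.0, eta)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        lp_prop = log_gibbs_post(prop, eta)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples[1000:])  # drop burn-in

# Smaller eta -> wider posterior; larger eta -> tighter concentration
for eta in (0.1, 0.5, 2.0):
    s = metropolis(eta)
    print(f"eta={eta}: mean={s.mean():.3f}, sd={s.std():.3f}")
```

Because every choice of η simply rescales the posterior spread, no single value can deliver correct frequentist coverage across problems, which motivates the sequential construction proposed in the paper.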
Problem

Research questions and friction points this paper is trying to address.

Addressing poor uncertainty quantification in Gibbs posteriors
Proposing sequential extension to improve frequentist coverage properties
Validating method with theoretical guarantees and PCA application
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sequential Gibbs posteriors extension for uncertainty calibration
Bernstein-von Mises theorem proof on manifolds and Euclidean space
Application to principal component analysis with improved coverage
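For intuition on the PCA application, the sketch below sets up a plain (non-sequential) Gibbs posterior over the leading principal direction, using the reconstruction-error loss on the unit sphere with a uniform prior and a projected random-walk Metropolis sampler. This is not the authors' sequential algorithm; the data, loss, and sampler settings are illustrative assumptions only, meant to show what a loss-based posterior on a manifold looks like.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data with a dominant principal direction along the first axis
n, d = 300, 3
X = rng.normal(size=(n, d)) * np.array([3.0, 1.0, 1.0])
S = X.T @ X / n  # sample covariance (data are mean-zero by construction)

def loss(v):
    """Average reconstruction error when projecting onto direction v."""
    return np.trace(S) - v @ S @ v

def log_gibbs_post(v, eta):
    # Uniform prior on the unit sphere; unnormalized log density
    return -eta * n * loss(v)

def sphere_metropolis(eta, n_iter=4000, step=0.05):
    """Random-walk Metropolis with proposals projected back to the sphere
    (isotropic noise followed by normalization gives a symmetric proposal)."""
    v = np.ones(d) / np.sqrt(d)
    lp = log_gibbs_post(v, eta)
    samples = []
    for _ in range(n_iter):
        prop = v + step * rng.normal(size=d)
        prop /= np.linalg.norm(prop)  # project back to the unit sphere
        lp_prop = log_gibbs_post(prop, eta)
        if np.log(rng.uniform()) < lp_prop - lp:
            v, lp = prop, lp_prop
        samples.append(v.copy())
    return np.array(samples[1000:])  # drop burn-in

samples = sphere_metropolis(eta=0.5)
v_hat = samples.mean(axis=0)
v_hat /= np.linalg.norm(v_hat)
print("posterior mean direction:", np.round(np.abs(v_hat), 3))
```

The draws concentrate near the top eigenvector of S; the paper's contribution is showing how staging the loss information, rather than fixing one η, yields credible regions on such manifolds with reliable frequentist coverage.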
Steven Winter
Department of Statistical Science, Duke University, Durham, North Carolina
Omar Melikechi
Duke University
statistics · biostatistics · machine learning
David B. Dunson
Department of Statistical Science, Duke University, Durham, North Carolina; Department of Mathematics, Duke University, Durham, North Carolina