🤖 AI Summary
Problem: Bayesian posterior inference faces a fundamental challenge: when the parameter space is infinite-dimensional or combinatorially large, posterior uncertainty cannot be quantified analytically with standard summaries.
Method: We propose a general-purpose posterior inference framework that reformulates posterior inference as a prediction problem over the parameter space, leveraging conformal prediction to deliver distribution-free, statistically rigorous coverage guarantees. To capture multimodal posterior structure, we construct a kernel density score using posterior Monte Carlo samples and a parameter dissimilarity metric, then integrate nonparametric density estimation with clustering for interpretable mode identification.
Contribution/Results: The method jointly yields a statistically valid point estimate and credible regions with finite-sample posterior coverage guarantees. Extensive experiments, including simulations and real-data analyses under stochastic block models, demonstrate strong interpretability, scalability to high-dimensional and combinatorial settings, and robust performance across scenarios.
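The pipeline in the summary can be sketched end to end. Below is a minimal 1-D toy illustration, not the paper's implementation: the Gaussian kernel, the absolute-difference dissimilarity, the bandwidth `h`, and the standard split-conformal calibration are all assumptions made here for concreteness.

```python
import numpy as np

def kde_scores(train, evals, dissim, h):
    """Kernel density score built only from pairwise dissimilarities
    (Gaussian kernel assumed; the paper's exact kernel may differ)."""
    D = np.array([[dissim(x, t) for t in train] for x in evals])
    return np.exp(-(D / h) ** 2).mean(axis=1)

# Toy stand-in: 1-D "parameters" with absolute-difference dissimilarity.
rng = np.random.default_rng(0)
post = rng.normal(0.0, 1.0, size=2000)    # hypothetical posterior draws
train, calib = post[:1000], post[1000:]   # split for conformal calibration
dissim = lambda a, b: abs(a - b)
h = 0.5                                   # bandwidth (assumed, not tuned)

# MAP-like point estimate: the draw with the highest density score.
point = train[np.argmax(kde_scores(train, train, dissim, h))]

# Split-conformal threshold: the floor(alpha*(n+1))-th smallest
# calibration score. The region {theta : score(theta) >= t} then
# covers a fresh posterior draw with probability >= 1 - alpha.
alpha = 0.1
s_cal = np.sort(kde_scores(train, calib, dissim, h))
t = s_cal[int(np.floor(alpha * (len(calib) + 1))) - 1]
```

The coverage guarantee relies only on exchangeability of the posterior draws, which is why no distributional assumptions on the parameter space are needed.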
📝 Abstract
Bayesian posterior distributions naturally represent parameter uncertainty informed by data. However, when the parameter space is complex, as in many nonparametric settings where it is infinite-dimensional or combinatorially large, standard summaries such as posterior means, credible intervals, or simple notions of multimodality are often unavailable, hindering interpretable posterior uncertainty quantification. We introduce Conformalized Bayesian Inference (CBI), a broadly applicable and computationally efficient framework for posterior inference on nonstandard parameter spaces. CBI yields a point estimate, a credible region with assumption-free posterior coverage guarantees, and a principled analysis of posterior multimodality, requiring only Monte Carlo samples from the posterior and a notion of discrepancy between parameters. The method builds a discrepancy-based kernel density score for each parameter value, yielding a maximum-a-posteriori-like point estimate and a credible region derived from conformal prediction principles. The key conceptual step underlying this construction is the reinterpretation of posterior inference as prediction on the parameter space. A final density-based clustering step identifies representative posterior modes. We investigate a number of theoretical and methodological properties of CBI and demonstrate its practicality, scalability, and versatility in simulated and real data clustering applications with Bayesian random partition models.
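The final density-based clustering step can be illustrated with a simple density-peak-style heuristic on a 1-D toy posterior: each draw joins the cluster of its nearest denser neighbour if that neighbour is within a radius `r`, and otherwise seeds a new mode. Everything here, the function `density_peak_labels`, the radius `r`, the bandwidth, and the bimodal toy posterior, is an illustrative assumption; the paper's exact clustering procedure may differ.

```python
import numpy as np

def kde_scores(x, h):
    # Gaussian kernel density score at every sample (1-D toy; a general
    # dissimilarity matrix could replace the pairwise |x_i - x_j|).
    D = np.abs(x[:, None] - x[None, :])
    return np.exp(-(D / h) ** 2).mean(axis=1)

def density_peak_labels(x, scores, r):
    """Greedy density-peak clustering: visit samples from densest to
    least dense; each joins the cluster of its nearest denser neighbour
    within radius r, otherwise it seeds a new cluster (candidate mode)."""
    order = np.argsort(-scores)        # densest first
    labels = np.full(len(x), -1)
    reps = []                          # representative (peak) of each mode
    for rank, i in enumerate(order):
        if rank:
            prev = order[:rank]        # all denser, already-labelled samples
            j = prev[np.argmin(np.abs(x[prev] - x[i]))]
            if abs(x[j] - x[i]) <= r:
                labels[i] = labels[j]
                continue
        labels[i] = len(reps)          # no denser neighbour nearby: new mode
        reps.append(x[i])
    return labels, reps

# Hypothetical bimodal posterior with well-separated modes near 0 and 5.
rng = np.random.default_rng(1)
draws = np.concatenate([rng.normal(0.0, 0.5, 1000), rng.normal(5.0, 0.5, 1000)])
labels, reps = density_peak_labels(draws, kde_scores(draws, h=0.4), r=2.0)
```

On this toy target the heuristic recovers two clusters whose representatives sit near the two mode centres, which is the kind of interpretable mode summary the clustering step is meant to provide.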