🤖 AI Summary
This work addresses the problem that conventional Bayesian credible intervals often fail to achieve reliable predictive coverage and produce unstable prediction set sizes under model misspecification and distributional shift. The authors present the first formulation of Bayesian conformal prediction as a decision-theoretic risk minimization problem: the Bayesian posterior predictive density serves as the nonconformity score, and Bayesian quadrature (numerical integration) is used within the split conformal prediction framework to estimate and minimize the expected prediction set size. The method is evaluated on regression and classification tasks, including distribution-shift benchmarks such as ImageNet-A. In sparse regression with a misspecified prior and a nominal level of 80%, it achieves 81% empirical coverage, whereas traditional Bayesian credible intervals cover only 49%; it also substantially reduces run-to-run variability in prediction set size, offering both reliability and stability.
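To make the mechanism concrete, here is a minimal sketch of split conformal prediction with a posterior predictive density as the nonconformity score, assuming a one-dimensional Gaussian posterior predictive; the function names (`calibrate_threshold`, `prediction_set`) and the Gaussian form are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy import stats

def calibrate_threshold(pred_mean, pred_std, y_calib, alpha=0.2):
    """Split-conformal calibration with a predictive-density nonconformity score."""
    # Score: negative posterior predictive log-density, so outcomes the model
    # considers unlikely receive large nonconformity scores.
    scores = -stats.norm.logpdf(y_calib, loc=pred_mean, scale=pred_std)
    n = len(scores)
    # Finite-sample quantile used by split conformal prediction.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.inf if k > n else np.sort(scores)[k - 1]

def prediction_set(mean, std, threshold, y_grid):
    """All grid values whose nonconformity score stays below the calibrated threshold."""
    scores = -stats.norm.logpdf(y_grid, loc=mean, scale=std)
    return y_grid[scores <= threshold]
```

With `alpha=0.2` the calibrated threshold targets the 80% nominal level reported in the experiments; for a Gaussian predictive, the resulting set is an interval centred at the predictive mean.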
📝 Abstract
Bayesian Conformal Prediction (BCP) uses Bayesian posterior predictive densities as non-conformity scores and Bayesian quadrature to estimate and minimise the expected prediction set size. Operating within a split conformal framework, BCP provides valid coverage guarantees and demonstrates reliable empirical coverage under model misspecification. Across regression and classification tasks, including distribution-shifted settings such as ImageNet-A, BCP yields prediction sets of comparable size to split conformal prediction, while exhibiting substantially lower run-to-run variability in set size. In sparse regression with nominal coverage of 80 percent, BCP achieves 81 percent empirical coverage under a misspecified prior, whereas Bayesian credible intervals under-cover at 49 percent.
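The Bayesian quadrature step in the abstract targets the expected prediction set size. As a rough stand-in, the sketch below approximates that quantity with plain grid quadrature under the same Gaussian-predictive assumption as above; the helper name `expected_set_size` and the Riemann-sum rule are illustrative, not the paper's Bayesian quadrature.

```python
import numpy as np
from scipy import stats

def expected_set_size(pred_means, pred_stds, threshold, y_grid):
    """Average Lebesgue measure of {y : score(x, y) <= threshold} over test inputs."""
    dy = y_grid[1] - y_grid[0]  # uniform grid spacing used as the quadrature weight
    sizes = [
        np.sum(-stats.norm.logpdf(y_grid, loc=m, scale=s) <= threshold) * dy
        for m, s in zip(pred_means, pred_stds)
    ]
    return float(np.mean(sizes))
```

Minimising an estimate of this kind is the risk-minimisation objective the summary describes; per the abstract, the paper computes the integral with Bayesian quadrature rather than a fixed grid.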