🤖 AI Summary
In high-stakes settings that require uncertainty quantification for black-box models, conventional frequentist conformal prediction provides only a single marginal coverage guarantee and offers no characterization of the full loss distribution.
Method: This paper reformulates conformal prediction as a Bayesian numerical integration problem and proposes an uncertainty quantification framework based on Bayesian quadrature. It uses a nonparametric kernel embedding to model the posterior over conformity scores, enabling distribution-free estimation of the predictive loss posterior at test time.
Contribution/Results: The approach yields compact, discriminative prediction intervals while delivering the full predictive loss distribution together with calibrated quantile guarantees. Evaluation on multiple benchmark datasets shows better-calibrated coverage, tighter intervals, and improved interpretability compared to standard conformal prediction.
📝 Abstract
As machine learning-based prediction systems are increasingly used in high-stakes situations, it is important to understand how such predictive models will perform upon deployment. Distribution-free uncertainty quantification techniques such as conformal prediction provide guarantees about the loss black-box models will incur even when the details of the models are hidden. However, such methods are based on frequentist probability, which unduly limits their applicability. We revisit the central aspects of conformal prediction from a Bayesian perspective and thereby illuminate the shortcomings of frequentist guarantees. We propose a practical alternative based on Bayesian quadrature that provides interpretable guarantees and offers a richer representation of the likely range of losses to be observed at test time.
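The contrast between a single frequentist guarantee and a richer Bayesian representation can be illustrated with a small sketch. This is not the paper's kernel-embedding quadrature method; it is a simpler, classical observation in the same spirit: under exchangeability, the coverage attained by a split-conformal threshold built from `n` calibration scores is not a point value but follows a Beta distribution over order statistics, so one can report a whole posterior over coverage rather than one marginal number. The calibration scores here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set: conformity scores on held-out data.
n, alpha = 100, 0.1
cal_scores = rng.normal(size=n)

# Standard (frequentist) split conformal prediction reports one number:
# the (1 - alpha) threshold, taken at order statistic ceil((n+1)(1-alpha)).
k = int(np.ceil((n + 1) * (1 - alpha)))
q_hat = np.sort(cal_scores)[k - 1]

# Bayesian reading: under exchangeability, the coverage achieved by this
# threshold is Beta(k, n + 1 - k) distributed (a classical order-statistics
# result), so we can summarize a full distribution over coverage instead
# of the single marginal guarantee "coverage >= 1 - alpha".
cov_samples = rng.beta(k, n + 1 - k, size=200_000)
mean_cov = cov_samples.mean()                      # ~ k / (n + 1)
lo, hi = np.quantile(cov_samples, [0.05, 0.95])    # 90% credible interval
print(f"posterior mean coverage: {mean_cov:.3f}")
print(f"90% credible interval:   [{lo:.3f}, {hi:.3f}]")
```

With `n = 100` and `alpha = 0.1`, the mean coverage is about `91/101 ≈ 0.901`, slightly above the nominal level, and the credible interval makes the run-to-run variability of coverage explicit in a way the single guarantee does not.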