🤖 AI Summary
This work addresses epistemic predictive uncertainty (EPU) in conformal prediction arising from the coexistence of multiple plausible models by proposing a fine-grained quantification method. It exposes the implicit model multiplicity inherent in conformal prediction and shows that the procedure induces credal sets, i.e. closed, convex sets of predictive distributions, extending this characterisation to split conformal prediction for the first time. Building on this foundation, the authors introduce an efficient and analytically tractable EPU measure based on Maximum Mean Imprecision, which quantifies the degree of conflicting information within the induced credal set. Experimental results demonstrate that, in active learning and selective classification tasks, this measure yields richer and more fine-grained uncertainty assessments than conventional approaches that rely solely on the size of the prediction region.
📝 Abstract
We study the problem of quantifying epistemic predictive uncertainty (EPU) -- that is, uncertainty faced at prediction time due to the existence of multiple plausible predictive models -- within the framework of conformal prediction (CP). To expose the implicit model multiplicity underlying CP, we build on recent results showing that, under a mild assumption, any full CP procedure induces a closed and convex set of predictive distributions, commonly referred to as a credal set. Importantly, the conformal prediction region (CPR) coincides exactly with the set of labels to which all distributions in the induced credal set assign probability at least $1-\alpha$. As our first contribution, we prove that this characterisation also holds in split CP. Building on this connection, we then propose a computationally efficient and analytically tractable uncertainty measure, based on \emph{Maximum Mean Imprecision}, which quantifies EPU as the degree of conflicting information within the induced credal set. Experiments on active learning and selective classification demonstrate that the quantified EPU provides substantially more informative and fine-grained uncertainty assessments than reliance on CPR size alone. More broadly, this work highlights the potential of CP to serve as a principled basis for decision-making under epistemic uncertainty.
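Under the characterisation above, the prediction region can be read off from the credal set's lower probabilities: a label belongs to the region exactly when every distribution in the credal set assigns it probability at least $1-\alpha$. A minimal sketch, using a hypothetical finite approximation of a credal set over four labels (the distributions and $\alpha$ here are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical finite approximation of a credal set over 4 labels:
# each row is one plausible categorical predictive distribution.
credal_set = np.array([
    [0.96, 0.02, 0.01, 0.01],
    [0.92, 0.05, 0.02, 0.01],
    [0.97, 0.01, 0.01, 0.01],
])

alpha = 0.1  # target miscoverage level

# Lower probability of each label across the credal set.
lower_prob = credal_set.min(axis=0)

# Region of labels that *every* distribution deems likely enough:
# {y : P(y) >= 1 - alpha for all P in the credal set}.
region = np.where(lower_prob >= 1 - alpha)[0]
print(region.tolist())  # -> [0]
```

Here only label 0 survives, since its lower probability (0.92) clears the $1-\alpha = 0.9$ threshold under every distribution; the spread between the rows is the kind of conflicting information the paper's Maximum Mean Imprecision measure is designed to quantify.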