Efficient Credal Prediction through Decalibration

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of applying credal prediction to foundation models and multimodal systems, where existing approaches are hindered by their computational cost, typically the (re-)training of an ensemble of models. The authors propose an efficient, ensemble-free credal prediction approach that constructs a probability interval for each class based on relative likelihoods to capture epistemic uncertainty. Notably, this method requires neither model retraining nor ensembling and, for the first time, enables credal sets for advanced models such as TabPFN and CLIP. Experimental results demonstrate strong performance in terms of coverage efficiency, out-of-distribution detection, and in-context learning, significantly expanding the applicability of credal prediction to complex, modern architectures.

📝 Abstract
A reliable representation of uncertainty is essential for the application of modern machine learning methods in safety-critical settings. In this regard, the use of credal sets (i.e., convex sets of probability distributions) has recently been proposed as a suitable approach to representing epistemic uncertainty. However, as with other approaches to epistemic uncertainty, training credal predictors is computationally complex and usually involves (re-)training an ensemble of models. The resulting computational complexity prevents their adoption for complex models such as foundation models and multi-modal systems. To address this problem, we propose an efficient method for credal prediction that is grounded in the notion of relative likelihood and inspired by techniques for the calibration of probabilistic classifiers. For each class label, our method predicts a range of plausible probabilities in the form of an interval. To produce the lower and upper bounds of these intervals, we propose a technique that we refer to as decalibration. Extensive experiments show that our method yields credal sets with strong performance across diverse tasks, including coverage-efficiency evaluation, out-of-distribution detection, and in-context learning. Notably, we demonstrate credal prediction on models such as TabPFN and CLIP -- architectures for which the construction of credal sets was previously infeasible.
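The abstract describes predicting, for each class, an interval of plausible probabilities whose bounds come from a calibration-inspired transformation of a single model's output. As a minimal illustrative sketch of that idea (not the authors' actual decalibration procedure), one can bound each class probability by applying temperature scaling at several temperatures — deliberately over- and under-confident versions of the same classifier — and taking the per-class minimum and maximum. The function name, the choice of temperature scaling, and the temperature values are all assumptions for illustration:

```python
import numpy as np

def decalibration_intervals(logits, temperatures=(0.5, 1.0, 2.0)):
    """Sketch: per-class probability intervals from one set of logits.

    Temperatures below 1 sharpen the softmax (over-confident), above 1
    flatten it (under-confident); the per-class min/max over these
    rescaled distributions gives a lower/upper probability bound.
    This is an illustration of the interval idea, not the paper's method.
    """
    logits = np.asarray(logits, dtype=float)
    probs = []
    for t in temperatures:
        z = logits / t
        z = z - z.max()          # subtract max for numerical stability
        p = np.exp(z)
        probs.append(p / p.sum())
    probs = np.stack(probs)      # shape: (n_temperatures, n_classes)
    return probs.min(axis=0), probs.max(axis=0)

lower, upper = decalibration_intervals([2.0, 1.0, 0.5])
# Each interval contains the plain softmax probability (temperature 1),
# and the induced credal set is non-empty: sum(lower) <= 1 <= sum(upper).
```

Since temperature 1 is included, every interval brackets the original softmax output, and the intervals jointly admit at least one proper probability distribution — the kind of consistency a credal (interval-valued) prediction needs.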
Problem

Research questions and friction points this paper is trying to address.

credal prediction
epistemic uncertainty
computational complexity
foundation models
calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

credal prediction
decalibration
epistemic uncertainty
relative likelihood
foundation models