🤖 AI Summary
Existing conformal prediction methods output only prediction sets guaranteed to cover the true label, without providing calibrated probability estimates for the individual candidate labels within the set. This work proposes Input-Adaptive Temperature Scaling (IATS), the first method to assign calibrated per-label probabilities over conformal prediction sets while strictly preserving the prescribed coverage guarantee. Its core innovation is an input-dependent temperature parameter that adaptively rescales the model's logits, ensuring marginal calibration of the predicted probabilities over the adaptive set. Evaluated on multiple image classification benchmarks, IATS reduces Expected Calibration Error (ECE) by 30–50% relative to baselines while exactly maintaining target coverage levels (e.g., 90%). By simultaneously achieving reliability (valid coverage) and discriminativeness (sharp, calibrated probabilities), IATS provides an interpretable and verifiable probabilistic interface for uncertainty quantification.
📝 Abstract
Conformal prediction enables the construction of high-coverage prediction sets for any pre-trained model, guaranteeing that the true label lies within the set with a specified probability. However, these sets do not provide probability estimates for individual labels, limiting their practical use. In this paper, we propose, to the best of our knowledge, the first method for assigning calibrated probabilities to elements of a conformal prediction set. Our approach frames this as an adaptive calibration problem, selecting an input-specific temperature parameter to match the desired coverage level. Experiments on several challenging image classification datasets demonstrate that our method maintains coverage guarantees while significantly reducing expected calibration error.
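The abstract does not spell out the calibration procedure, but its core idea, choosing a per-input temperature so that the softmax mass placed on the conformal prediction set matches the target coverage level, can be sketched as below. The function names (`softmax`, `adaptive_temperature`), the bisection search, and the example logits/set are all illustrative assumptions, not the authors' actual algorithm:

```python
import numpy as np

def softmax(logits, temperature):
    """Temperature-scaled softmax (numerically stable)."""
    z = logits / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def adaptive_temperature(logits, pred_set, target_mass, lo=0.05, hi=20.0, iters=50):
    """Hypothetical sketch: bisect for a per-input temperature T such that
    the softmax probability mass on the conformal set `pred_set` equals
    `target_mass` (e.g. 0.9). When the set contains the top-scoring labels,
    the mass on the set decreases monotonically as T grows (the
    distribution flattens), so bisection applies."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        mass = softmax(logits, mid)[pred_set].sum()
        if mass > target_mass:
            lo = mid  # distribution too peaked on the set -> raise T
        else:
            hi = mid  # too flat -> lower T
    return 0.5 * (lo + hi)

# Illustrative example: 5-class logits, conformal set {0, 2}, 90% target.
logits = np.array([3.0, 0.5, 2.0, -1.0, 0.0])
pred_set = np.array([0, 2])
T = adaptive_temperature(logits, pred_set, target_mass=0.9)
probs = softmax(logits, T)
print(round(probs[pred_set].sum(), 3))  # -> 0.9
```

The bisection relies on the mass inside the set shrinking toward |set|/K as T → ∞ and growing toward 1 as T → 0 (when the argmax is in the set), so any target between those extremes is attainable.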