JUCAL: Jointly Calibrating Aleatoric and Epistemic Uncertainty in Classification Tasks

📅 2026-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing calibration methods struggle to simultaneously account for aleatoric uncertainty (arising from label noise) and epistemic uncertainty (stemming from model uncertainty), often resulting in distorted confidence estimates. This work proposes JUCAL, the first algorithm to jointly calibrate both uncertainty types by learning two scalar parameters on a validation set to weight and scale their fusion. JUCAL is model-agnostic—it can be applied to any ensemble of pre-trained classifiers without requiring access to internal model parameters—and incurs minimal computational overhead. Empirical results across diverse text classification benchmarks demonstrate that JUCAL significantly outperforms state-of-the-art approaches, reducing negative log-likelihood and prediction set size by up to 15% and 20%, respectively. Notably, an ensemble of only five models calibrated with JUCAL surpasses temperature scaling with fifty models, achieving a tenfold reduction in inference cost.
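The temperature-scaling (pool-then-calibrate) baseline that JUCAL is compared against can be sketched as follows. This is a hedged illustration, not code from the paper: it pools the ensemble by averaging member logits (one common variant of pooling) and fits a single temperature by grid search on validation NLL. The function names and the grid are assumptions.

```python
import numpy as np

def log_softmax(z):
    # Numerically stable log-softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def pool_then_calibrate(member_logits, labels, temps):
    """Fit one temperature T for a pooled ensemble by validation NLL.

    member_logits: (M, N, C) logits from M ensemble members.
    labels:        (N,) integer class labels of the validation set.
    temps:         candidate temperatures to grid-search over.
    """
    pooled = member_logits.mean(axis=0)  # (N, C), pooled ensemble logits

    def nll(t):
        logp = log_softmax(pooled / t)
        return -logp[np.arange(len(labels)), labels].mean()

    # Pick the temperature with the lowest validation NLL.
    return min(temps, key=nll)
```

Note that this baseline uses a single scalar for the pooled prediction, which is exactly the limitation the summary points to: it cannot rebalance aleatoric against epistemic uncertainty.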

📝 Abstract
We study post-hoc uncertainty calibration for trained ensembles of classifiers, considering both aleatoric (label-noise) and epistemic (model) uncertainty. Among the most popular and widely used calibration methods in classification are temperature scaling (i.e., pool-then-calibrate) and conformal methods. Their main shortcoming is that they do not balance the proportions of aleatoric and epistemic uncertainty; this imbalance can severely misrepresent predictive uncertainty, yielding overconfident predictions in some input regions and underconfident predictions in others. To address this shortcoming, we present a simple but powerful calibration algorithm, Joint Uncertainty Calibration (JUCAL), that jointly calibrates aleatoric and epistemic uncertainty. JUCAL learns two constants that weight and scale the epistemic and aleatoric uncertainties by minimizing the negative log-likelihood (NLL) on the validation/calibration dataset. JUCAL can be applied to any trained ensemble of classifiers (e.g., transformers, CNNs, or tree-based methods) with minimal computational overhead and without requiring access to the models' internal parameters. We experimentally evaluate JUCAL on various text classification tasks, for ensembles of varying sizes and with different ensembling strategies. Our experiments show that JUCAL significantly outperforms state-of-the-art calibration methods across all considered classification tasks, reducing NLL and prediction set size by up to 15% and 20%, respectively. Interestingly, even applying JUCAL to an ensemble of size 5 can outperform temperature-scaled ensembles of size up to 50 in terms of NLL and prediction set size, resulting in up to 10 times smaller inference costs. We therefore propose JUCAL as a new go-to method for calibrating ensembles in classification.
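The recipe in the abstract, two scalars fitted jointly by minimizing validation NLL to weight and scale the two uncertainty components, can be sketched minimally. The abstract does not give the fusion formula, so the form below (penalizing the averaged logits by a weighted measure of member disagreement, then temperature-scaling) is an illustrative assumption, not JUCAL's actual parameterization; all names are hypothetical.

```python
import numpy as np

def fuse(member_logits, w, t):
    """Hypothetical two-scalar fusion of ensemble predictions.

    member_logits: (M, N, C) logits from M ensemble members.
    w: weight on the epistemic term (member disagreement).
    t: temperature scaling the fused logits (aleatoric sharpness).
    """
    mean_logits = member_logits.mean(axis=0)   # pooled prediction
    spread = member_logits.std(axis=0)         # disagreement across members
    fused = (mean_logits - w * spread) / t     # assumed fusion form
    e = np.exp(fused - fused.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def nll(probs, labels):
    # Mean negative log-likelihood of the true labels.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_two_scalars(member_logits, labels, ws, ts):
    # Joint search over the two scalars on the validation set,
    # minimizing NLL as the abstract describes (grid search for simplicity;
    # a gradient-based optimizer over the two scalars would also work).
    best = min((nll(fuse(member_logits, w, t), labels), w, t)
               for w in ws for t in ts)
    return best[1], best[2]
```

Because only two scalars are fitted on held-out data, the overhead is negligible, and the procedure needs only the members' output logits, consistent with the abstract's model-agnostic, black-box claim.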
Problem

Research questions and friction points this paper is trying to address.

aleatoric uncertainty
epistemic uncertainty
uncertainty calibration
classification
ensemble methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

joint uncertainty calibration
aleatoric uncertainty
epistemic uncertainty
ensemble calibration
negative log-likelihood