🤖 AI Summary
Deep learning models often produce overconfident predictions in high-stakes scenarios because they cannot distinguish between epistemic and aleatoric uncertainty, which compromises decision safety. To address this, the authors propose CUPID, a plug-and-play, training-free module that jointly estimates both uncertainty types at any network layer while supporting layer-wise attribution. CUPID models aleatoric uncertainty through a Bayesian identity mapping and captures epistemic uncertainty via structured perturbation analysis, providing a single-model, retraining-free, and pluggable framework for unified uncertainty quantification. Experiments demonstrate competitive performance across classification, regression, and out-of-distribution detection tasks, with fine-grained uncertainty decomposition that enhances the transparency and trustworthiness of AI systems.
📝 Abstract
Accurate estimation of uncertainty in deep learning is critical for deploying models in high-stakes domains such as medical diagnosis and autonomous decision-making, where overconfident predictions can lead to harmful outcomes. In practice, understanding the reason behind a model's uncertainty and the type of uncertainty it represents can support risk-aware decisions, enhance user trust, and guide additional data collection. However, many existing methods only address a single type of uncertainty or require modifications and retraining of the base model, making them difficult to adopt in real-world systems. We introduce CUPID (Comprehensive Uncertainty Plug-in estImation moDel), a general-purpose module that jointly estimates aleatoric and epistemic uncertainty without modifying or retraining the base model. CUPID can be flexibly inserted into any layer of a pretrained network. It models aleatoric uncertainty through a learned Bayesian identity mapping and captures epistemic uncertainty by analyzing the model's internal responses to structured perturbations. We evaluate CUPID across a range of tasks, including classification, regression, and out-of-distribution detection. The results show that it consistently delivers competitive performance while offering layer-wise insights into the origins of uncertainty. By making uncertainty estimation modular, interpretable, and model-agnostic, CUPID supports more transparent and trustworthy AI. Related code and data are available at https://github.com/a-Fomalhaut-a/CUPID.
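The abstract's core mechanism can be illustrated with a toy sketch: split a pretrained network at an insertion layer, apply small perturbations to the hidden activation, and use the resulting output spread as an epistemic-uncertainty proxy. This is only a minimal illustration of the general idea, not CUPID's actual implementation; the network stand-ins (`f_pre`, `f_post`), the isotropic Gaussian noise (the paper's perturbations are "structured"), and the `scale` parameter are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained network split at the insertion layer:
# f_pre maps input -> hidden activation, f_post maps hidden -> logits.
# (Hypothetical names; CUPID plugs into any layer of a real network.)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))
f_pre = lambda x: np.tanh(x @ W1)
f_post = lambda h: h @ W2

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def epistemic_at_layer(x, n_perturb=32, scale=0.05):
    """Perturb the hidden activation and measure how much the output
    distribution moves; larger spread suggests higher epistemic
    uncertainty at this layer. Isotropic Gaussian noise is used here
    as a simple placeholder for structured perturbations."""
    h = f_pre(x)
    probs = np.stack([
        softmax(f_post(h + scale * rng.normal(size=h.shape)))
        for _ in range(n_perturb)
    ])
    return probs.var(axis=0).mean()  # mean predictive variance

x = rng.normal(size=(1, 4))
u = epistemic_at_layer(x)
print(f"epistemic uncertainty proxy at this layer: {u:.6f}")
```

Because the probe only reads and perturbs activations at one layer, the same computation can be repeated at each insertion point to attribute uncertainty layer-wise, without touching the base model's weights.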