🤖 AI Summary
Predictive models mix several sources of uncertainty, and pointwise statistical risk is difficult to disentangle into its components. This work proposes a unified generative framework based on approximate Bayesian inference that, for the first time, establishes an explicit, interpretable decomposition linking pointwise statistical risk to two fundamental uncertainty types: aleatoric uncertainty (arising from inherent data noise) and epistemic uncertainty (stemming from model ignorance). The framework jointly generates multiple uncertainty measures while ensuring semantic consistency across them. Experiments on image benchmarks show improved out-of-distribution detection and misclassification identification, with higher AUROC scores than existing methods. The resulting quantifiable uncertainty estimates support downstream uncertainty-aware tasks such as active learning, safe decision-making, and model debugging.
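The summary does not spell out the decomposition itself; one standard way to split total predictive uncertainty into aleatoric and epistemic parts is the entropy/mutual-information decomposition computed from approximate-posterior samples (e.g., MC-dropout passes or ensemble members). Below is a minimal sketch of that idea; the function name and array layout are illustrative assumptions, not the paper's API:

```python
import numpy as np

def decompose_uncertainty(probs: np.ndarray, eps: float = 1e-12):
    """Split predictive uncertainty into aleatoric and epistemic parts.

    probs: array of shape (S, N, C) with class probabilities from S
    approximate-posterior samples for N inputs over C classes.
    """
    mean_probs = probs.mean(axis=0)                                   # (N, C)
    # Total uncertainty: entropy of the posterior-averaged prediction.
    total = -(mean_probs * np.log(mean_probs + eps)).sum(axis=-1)
    # Aleatoric part: expected entropy of the individual predictions.
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=-1).mean(axis=0)
    # Epistemic part: mutual information between prediction and
    # model parameters, i.e. what remains after removing data noise.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

Under this view, aleatoric uncertainty stays high even when all posterior samples agree (noisy data), while epistemic uncertainty is high precisely when the samples disagree (model ignorance).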
📝 Abstract
The literature offers various measures of predictive uncertainty, but their relationships to each other remain unclear. This paper decomposes pointwise statistical risk into components associated with different sources of predictive uncertainty, namely aleatoric uncertainty (inherent data variability) and epistemic uncertainty (model-related uncertainty). Combining this decomposition with approximate Bayesian inference, we build a framework for generating different predictive uncertainty measures. We validate the method on image datasets by measuring how well its uncertainty estimates detect out-of-distribution and misclassified instances, using AUROC as the metric. The experimental results confirm that the measures derived from our framework are useful for these downstream tasks.
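To make the evaluation protocol concrete: out-of-distribution detection is scored by asking whether an uncertainty measure ranks OOD inputs above in-distribution ones, and AUROC is exactly the probability that a random OOD point gets a higher score than a random in-distribution point. A sketch using scikit-learn's `roc_auc_score`; the scores here are synthetic placeholders, not the paper's results:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical uncertainty scores (e.g., epistemic uncertainty from
# the decomposition above); higher should flag OOD inputs.
rng = np.random.default_rng(0)
scores_in = rng.normal(loc=0.5, scale=0.2, size=1000)   # in-distribution
scores_ood = rng.normal(loc=1.2, scale=0.3, size=1000)  # out-of-distribution

labels = np.concatenate([np.zeros(1000), np.ones(1000)])  # 1 = OOD
scores = np.concatenate([scores_in, scores_ood])

print(f"OOD-detection AUROC: {roc_auc_score(labels, scores):.3f}")
```

The same scoring applies to misclassification identification, with labels marking misclassified versus correctly classified test points instead of OOD versus in-distribution.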