AI Summary
To address the lack of a universally optimal solution for uncertainty quantification in safety-critical machine learning applications, this paper proposes a task-driven, customized uncertainty modeling paradigm. Methodologically, we introduce a configurable second-order distributional framework that decomposes total uncertainty into aleatoric and epistemic components, and establish principled alignment rules between scoring metrics (e.g., the log score and zero-one-loss-derived measures) and downstream tasks: selective prediction, out-of-distribution detection, and active learning. Our key contributions are fourfold: (i) the first systematic theoretical demonstration that "no single uncertainty measure is universally optimal"; (ii) task-specific optimality guarantees for selective prediction, where the scoring rule is aligned with the task loss; (iii) mutual-information-based out-of-distribution detection attaining provable optimality; and (iv) zero-one-loss-guided epistemic uncertainty estimation that significantly improves sample efficiency in active learning.
Abstract
Proper quantification of predictive uncertainty is essential for the use of machine learning in safety-critical applications. Various uncertainty measures have been proposed for this purpose, typically claiming superiority over other measures. In this paper, we argue that there is no single best measure. Instead, uncertainty quantification should be tailored to the specific application. To this end, we use a flexible family of uncertainty measures that distinguishes between total, aleatoric, and epistemic uncertainty of second-order distributions. These measures can be instantiated with specific loss functions, so-called proper scoring rules, to control their characteristics, and we show that different characteristics are useful for different tasks. In particular, we show that, for the task of selective prediction, the scoring rule should ideally match the task loss. On the other hand, for out-of-distribution detection, our results confirm that mutual information, a widely used measure of epistemic uncertainty, performs best. Furthermore, in an active learning setting, epistemic uncertainty based on zero-one loss is shown to consistently outperform other uncertainty measures.
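As a concrete illustration of the decomposition the abstract describes, the following is a minimal sketch (not code from the paper) of the standard entropy-based instantiation for an ensemble approximating a second-order distribution: under the log score, total uncertainty is the entropy of the mean prediction, aleatoric uncertainty is the mean entropy of the ensemble members, and their difference is the mutual information used as the epistemic measure. All function and variable names here are illustrative.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy in nats along the class axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose(member_probs):
    """Entropy-based uncertainty decomposition for an ensemble.

    member_probs: array of shape (n_members, n_classes), each row a
    predictive distribution from one ensemble member.
    Returns (total, aleatoric, epistemic) with
    total = H(E[p]), aleatoric = E[H(p)], epistemic = total - aleatoric,
    i.e. the mutual information between label and model.
    """
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = entropy(member_probs).mean()
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Members that agree: low epistemic uncertainty.
agree = np.array([[0.9, 0.1], [0.9, 0.1]])
# Confident but conflicting members: high epistemic uncertainty.
disagree = np.array([[0.9, 0.1], [0.1, 0.9]])

print(decompose(agree))     # epistemic ~ 0
print(decompose(disagree))  # epistemic > 0
```

Swapping the log score for another proper scoring rule (e.g., zero-one or Brier loss) changes the entropy-like function and hence the characteristics of the resulting measures, which is the degree of freedom the paper exploits per task.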