🤖 AI Summary
Quantifying epistemic uncertainty within predictive uncertainty decomposition remains challenging, especially when the posterior distribution of the model parameters is intractable.
Method: We propose a frequentist epistemic uncertainty measure based on bootstrap resampling and establish its asymptotic (large-sample) equivalence to Bayesian mutual information. The approach bypasses Bayesian inference entirely, relying solely on resampling and an asymptotic expansion to approximate mutual information efficiently.
Contribution/Results: Our method provides the first rigorous theoretical foundation and computationally tractable pathway for frequentist estimation of epistemic uncertainty. It formally justifies empirical practices such as deep ensembles by revealing their implicit modeling of epistemic uncertainty. Extensive experiments demonstrate that the proposed measure improves uncertainty calibration and enhances decision-making performance in downstream tasks.
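To make the resampling idea concrete, here is a minimal sketch of a bootstrap analogue of the ensemble-based mutual-information decomposition for a classifier. This is an illustration under our own assumptions, not the paper's exact estimator (which relies on an asymptotic expansion); `train_model`, `n_boot`, and all variable names are hypothetical.

```python
import numpy as np

def bootstrap_epistemic_uncertainty(X_train, y_train, x_query,
                                    train_model, n_boot=50, rng=None):
    """Sketch: estimate epistemic uncertainty at x_query via the bootstrap.

    train_model is a hypothetical user-supplied function that fits a model
    on (X, y) and returns predicted class probabilities for x_query.
    """
    rng = np.random.default_rng(rng)
    n = len(X_train)
    probs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample the data with replacement
        probs.append(train_model(X_train[idx], y_train[idx], x_query))
    probs = np.asarray(probs)             # shape: (n_boot, n_classes)

    eps = 1e-12
    mean_p = probs.mean(axis=0)
    # Entropy of the averaged prediction: total uncertainty.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Average entropy across bootstrap models: aleatoric uncertainty.
    aleatoric = -np.sum(probs * np.log(probs + eps), axis=1).mean()
    # Their gap is the epistemic (mutual-information-style) component.
    return total - aleatoric
```

Note that replacing the bootstrap resampling with random initialization and full-data training recovers the standard deep-ensemble estimate, which is the connection to deep ensembles alluded to above.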
📝 Abstract
Decomposing prediction uncertainty into its aleatoric (irreducible) and epistemic (reducible) components is critical for the development and deployment of machine learning systems. A popular, principled measure for epistemic uncertainty is the mutual information between the response variable and model parameters. However, evaluating this measure requires access to the posterior distribution of the model parameters, which is challenging to compute. In view of this, we introduce a frequentist measure of epistemic uncertainty based on the bootstrap. Our main theoretical contribution is a novel asymptotic expansion that reveals that our proposed (frequentist) measure and the (Bayesian) mutual information are asymptotically equivalent. This provides frequentist interpretations to mutual information and new computational strategies for approximating it. Moreover, we link our proposed approach to the widely used heuristic approach of deep ensembles, giving added perspective on their practical success.
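For concreteness, the mutual-information measure referred to above is commonly written as the gap between total and aleatoric uncertainty under the parameter posterior; the notation below is a standard rendering, not taken from the paper:

$$
\underbrace{I\big(Y;\theta \mid x, \mathcal{D}\big)}_{\text{epistemic}}
= \underbrace{\mathcal{H}\!\Big[\,\mathbb{E}_{\theta \sim p(\theta \mid \mathcal{D})}\, p(y \mid x, \theta)\,\Big]}_{\text{total}}
- \underbrace{\mathbb{E}_{\theta \sim p(\theta \mid \mathcal{D})}\, \mathcal{H}\big[\, p(y \mid x, \theta)\,\big]}_{\text{aleatoric}}
$$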