🤖 AI Summary
This paper investigates the fundamental nature of epistemic uncertainty (EU) in neural networks under shortcut learning, addressing two open issues: (i) whether existing entropy decomposition methods adequately characterize EU, and (ii) the long-standing theoretical tension between the “ignorance” and “disagreement” interpretations of EU. Method: We introduce a controlled experimental framework integrating synthetic shortcut construction, multi-model ensembling, disagreement quantification, and entropy decomposition analysis. Contribution/Results: We empirically demonstrate that shortcut learning is a critical condition triggering model-level predictive disagreement—thereby amplifying the disagreement component of EU—whereas removing shortcuts shifts EU toward reflecting pure ignorance. Our results reveal strong contextual dependence of EU, reconciling the ignorance and disagreement perspectives for the first time. This yields an interpretable, controllable theoretical foundation and practical methodology for uncertainty quantification in deep learning.
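The entropy decomposition analyzed in the paper splits total predictive entropy of an ensemble into an expected (aleatoric) part and a disagreement (epistemic) part. A minimal NumPy sketch of this standard decomposition (function names and shapes are our own illustrative choices, not the paper's code):

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    # Shannon entropy in nats; eps guards against log(0)
    return -np.sum(p * np.log(p + eps), axis=axis)

def decompose_uncertainty(ensemble_probs):
    """ensemble_probs: array of shape (M, N, C) holding class
    probabilities from M ensemble members on N inputs.
    Returns (total, aleatoric, epistemic) per input."""
    mean_p = ensemble_probs.mean(axis=0)               # model-averaged prediction, (N, C)
    total = entropy(mean_p)                            # total predictive uncertainty
    aleatoric = entropy(ensemble_probs).mean(axis=0)   # expected per-member entropy
    epistemic = total - aleatoric                      # mutual information: disagreement term
    return total, aleatoric, epistemic
```

When the members agree, the epistemic term vanishes; when they place confident mass on different classes, it grows toward the total, which is the "disagreement" manifestation of EU the paper studies.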
📝 Abstract
The correct way to quantify predictive uncertainty in neural networks remains a topic of active discussion. In particular, it is unclear whether the state-of-the-art entropy decomposition leads to a meaningful representation of model, or epistemic, uncertainty (EU) in light of a debate that pits ignorance against disagreement perspectives. We aim to reconcile the conflicting viewpoints by arguing that both are valid but arise from different learning situations. Notably, we show that the presence of shortcuts is decisive for EU manifesting as disagreement.
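The kind of synthetic shortcut construction the framework relies on can be illustrated with a toy setup (the feature names and correlation scheme below are hypothetical, not the paper's actual benchmark): a core feature that is genuinely but noisily predictive, and a shortcut feature that perfectly mirrors the label during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shortcut_dataset(n=1000, with_shortcut=True):
    """Toy binary task: column 0 is a noisy 'core' feature that
    weakly predicts y; column 1 is either a shortcut that copies
    the label exactly, or pure noise when shortcuts are removed."""
    y = rng.integers(0, 2, size=n)
    core = y + rng.normal(0.0, 1.0, size=n)              # genuinely predictive, noisy
    shortcut = y.astype(float) if with_shortcut else rng.normal(size=n)
    X = np.stack([core, shortcut], axis=1)               # (n, 2)
    return X, y
```

Ensemble members trained on such data may latch onto either feature, so their predictions diverge on test points where the shortcut is broken; with `with_shortcut=False`, any remaining EU instead reflects ignorance about the noisy core feature alone.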