🤖 AI Summary
This work addresses the inefficiency and poor calibration of epistemic uncertainty estimation in Bayesian neural networks (BNNs) for classification tasks. We propose Epistemic Wrapping, the first method to incorporate belief functions from Dempster–Shafer evidence theory into BNN posterior modeling. A differentiable posterior transformation maps BNN outputs to belief distributions, yielding a model-agnostic, plug-and-play framework for uncertainty enhancement that imposes no constraints on network architecture or training paradigm. Evaluated on MNIST, Fashion-MNIST, CIFAR-10 and CIFAR-100, Epistemic Wrapping significantly improves uncertainty calibration, reducing expected calibration error (ECE) by 37%, and out-of-distribution (OOD) detection, raising the area under the ROC curve (AUC) by 5.2%. It also improves generalization robustness across diverse benchmarks.
📝 Abstract
Uncertainty estimation is pivotal in machine learning, especially for classification tasks, as it improves the robustness and reliability of models. We introduce a novel 'Epistemic Wrapping' methodology aimed at improving uncertainty estimation in classification. Our approach uses Bayesian Neural Networks (BNNs) as a baseline and transforms their outputs into belief function posteriors, effectively capturing epistemic uncertainty and offering an efficient and general methodology for uncertainty quantification. Comprehensive experiments employing a BNN baseline and an Interval Neural Network for inference on the MNIST, Fashion-MNIST, CIFAR-10 and CIFAR-100 datasets demonstrate that our Epistemic Wrapper significantly enhances generalisation and uncertainty quantification.
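The abstract describes wrapping a classifier's outputs into a belief-function posterior that separates class support from epistemic ignorance. The paper's actual differentiable transformation is not reproduced here; as a minimal illustrative sketch, assume a Dempster–Shafer-style mapping in which non-negative per-class evidence is derived from logits and residual mass is assigned to the whole frame of discernment (total ignorance). The function name `belief_wrap` and the evidence rule are assumptions for illustration only.

```python
import numpy as np

def belief_wrap(logits: np.ndarray):
    """Illustrative Dempster-Shafer-style wrapping of classifier logits.

    Hypothetical sketch, NOT the paper's actual Epistemic Wrapping
    transformation: clipped logits act as per-class evidence, and the
    leftover mass is placed on the full frame as epistemic uncertainty.
    """
    evidence = np.maximum(logits, 0.0)              # non-negative evidence per class
    k = logits.shape[-1]                            # number of classes
    strength = evidence.sum(axis=-1, keepdims=True) + k
    belief = evidence / strength                    # per-class belief mass
    uncertainty = k / strength                      # mass on the frame (ignorance)
    return belief, uncertainty

# With little evidence, mass concentrates on the frame (high epistemic
# uncertainty); confident logits shift mass onto individual classes.
b, u = belief_wrap(np.array([2.0, 0.5, -1.0]))
```

By construction the class beliefs and the ignorance mass form a valid mass assignment summing to one, which is the property that lets such a wrapper flag out-of-distribution inputs via the ignorance term rather than via a softmax probability.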