Density-Informed Pseudo-Counts for Calibrated Evidential Deep Learning

πŸ“… 2026-02-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge in evidential deep learning of disentangling epistemic and aleatoric uncertainty under distributional shift, where standard approaches often exhibit overconfidence on out-of-distribution samples. The authors propose DIP-EDL, a novel method that explicitly decouples class prediction from uncertainty magnitude through a density-aware pseudo-count mechanism, modeling the conditional label distribution and marginal covariate density separately. Built upon a hierarchical Bayesian framework, DIP-EDL integrates amortized variational inference with Dirichlet parameterization to achieve, for the first time in evidential deep learning, asymptotically identifiable separation of the two uncertainty types. Experiments demonstrate that DIP-EDL significantly improves calibration, robustness, and interpretability on out-of-distribution data while preserving strong predictive performance in high-density regions.

Technology Category

Application Category

πŸ“ Abstract
Evidential Deep Learning (EDL) is a popular framework for uncertainty-aware classification that models predictive uncertainty via Dirichlet distributions parameterized by neural networks. Despite its popularity, its theoretical foundations and behavior under distributional shift remain poorly understood. In this work, we provide a principled statistical interpretation by proving that EDL training corresponds to amortized variational inference in a hierarchical Bayesian model with a tempered pseudo-likelihood. This perspective reveals a major drawback: standard EDL conflates epistemic and aleatoric uncertainty, leading to systematic overconfidence on out-of-distribution (OOD) inputs. To address this, we introduce Density-Informed Pseudo-count EDL (DIP-EDL), a new parametrization that decouples class prediction from the magnitude of uncertainty by separately estimating the conditional label distribution and the marginal covariate density. This separation preserves evidence in high-density regions while shrinking predictions toward a uniform prior for OOD data. Theoretically, we prove that DIP-EDL achieves asymptotic concentration. Empirically, we show that our method enhances interpretability and improves robustness and uncertainty calibration under distributional shift.
Problem

Research questions and friction points this paper is trying to address.

Evidential Deep Learning
distributional shift
out-of-distribution detection
uncertainty calibration
epistemic uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evidential Deep Learning
Uncertainty Calibration
Density-Informed Pseudo-Counts
Distributional Shift
Bayesian Inference
πŸ”Ž Similar Papers
No similar papers found.
P
Pietro Carlotti
Department of Statistics and Data Sciences, The University of Texas at Austin
N
Nevena Gligić
Department of Statistics and Data Sciences, The University of Texas at Austin
Arya Farahi
Arya Farahi
University of Texas at Austin
Machine LearningStatistical InferenceAstroinformaticsTrustworthy AIExplainable AI