🤖 AI Summary
In high-stakes graph learning tasks, such as drug discovery and financial fraud detection, pretrained graph neural networks (GNNs) lack a plug-and-play way to quantify both epistemic and aleatoric uncertainty. To address this, we propose EPN, an evidential probing framework that requires no fine-tuning or retraining of the backbone: it extracts Dirichlet evidence from pretrained GNN representations via a lightweight MLP probe head. We further introduce EPN-reg, which augments the probe with evidence-based regularization, backed by theoretical justification, to improve epistemic uncertainty estimation. EPN is architecture-agnostic and compatible with any pretrained GNN. Empirically, it achieves state-of-the-art performance in uncertainty calibration and out-of-distribution detection across multiple benchmarks, while training only the lightweight probe head. By enabling reliable, interpretable uncertainty quantification without modifying pretrained models, EPN enhances the trustworthiness of GNN deployment in safety-critical applications.
📝 Abstract
Accurate quantification of both aleatoric and epistemic uncertainties is essential when deploying Graph Neural Networks (GNNs) in high-stakes applications such as drug discovery and financial fraud detection, where reliable predictions are critical. Although Evidential Deep Learning (EDL) efficiently quantifies uncertainty using a Dirichlet distribution over predictive probabilities, existing EDL-based GNN (EGNN) models require modifications to the network architecture and retraining, failing to take advantage of pre-trained models. We propose a plug-and-play framework for uncertainty quantification in GNNs that works with pre-trained models without the need for retraining. Our Evidential Probing Network (EPN) uses a lightweight Multi-Layer Perceptron (MLP) head to extract evidence from learned representations, allowing efficient integration with various GNN architectures. We further introduce evidence-based regularization techniques, referred to as EPN-reg, to enhance the estimation of epistemic uncertainty with theoretical justifications. Extensive experiments demonstrate that the proposed EPN-reg achieves state-of-the-art performance in accurate and efficient uncertainty quantification, making it suitable for real-world deployment.
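To make the probing idea concrete, the sketch below shows a minimal, dependency-free version of the pipeline the abstract describes: a small MLP head (a stand-in for EPN's probe, not the authors' code) maps a frozen GNN embedding to non-negative evidence, which parameterizes a Dirichlet distribution from which standard EDL uncertainty quantities follow. The probe architecture, sizes, and the entropy-based aleatoric proxy are illustrative assumptions; only the Dirichlet relations (alpha = evidence + 1, vacuity = K / sum(alpha)) follow standard EDL.

```python
import math
import random

def relu(x):
    return max(0.0, x)

class EvidenceProbe:
    """Hypothetical lightweight MLP probe head (a sketch, not the paper's
    implementation): maps a frozen GNN embedding to non-negative evidence."""
    def __init__(self, dim, num_classes, hidden=16, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.gauss(0, 0.1) for _ in range(dim)] for _ in range(hidden)]
        self.w2 = [[rng.gauss(0, 0.1) for _ in range(hidden)] for _ in range(num_classes)]

    def __call__(self, z):
        h = [relu(sum(w * x for w, x in zip(row, z))) for row in self.w1]
        # softplus keeps the per-class evidence non-negative
        return [math.log1p(math.exp(sum(w * x for w, x in zip(row, h))))
                for row in self.w2]

def dirichlet_uncertainties(evidence):
    """Standard EDL quantities from evidence: alpha = e + 1, S = sum(alpha)."""
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)
    probs = [a / S for a in alpha]                   # expected class probabilities
    vacuity = K / S                                  # epistemic (lack-of-evidence) signal
    entropy = -sum(p * math.log(p) for p in probs)   # aleatoric proxy: entropy of mean
    return probs, vacuity, entropy

probe = EvidenceProbe(dim=8, num_classes=3)
z = [0.5] * 8  # stand-in for a frozen, pre-trained GNN node embedding
probs, vacuity, entropy = dirichlet_uncertainties(probe(z))
```

Because only the probe's two weight matrices are learned while the GNN embedding `z` stays frozen, training cost is that of fitting a small MLP, which is what makes the approach plug-and-play across backbones.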