Evidential Uncertainty Probes for Graph Neural Networks

📅 2025-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
In high-stakes graph learning tasks such as drug discovery and financial fraud detection, pretrained graph neural networks (GNNs) offer no plug-and-play way to quantify epistemic and aleatoric uncertainty. To address this, the paper proposes EPN, an evidential probing framework that requires no fine-tuning or retraining of the backbone: a lightweight MLP probe head extracts Dirichlet evidence from pretrained GNN representations. The paper further introduces EPN-reg, evidence-based regularization with theoretical justification for improved epistemic uncertainty estimation. EPN is architecture-agnostic and compatible with any pretrained GNN. Empirically, it achieves state-of-the-art performance in uncertainty calibration and out-of-distribution detection across multiple benchmarks while adding minimal training cost. By enabling reliable uncertainty quantification without modifying pretrained models, EPN improves the trustworthiness of GNN deployment in safety-critical applications.

📝 Abstract
Accurate quantification of both aleatoric and epistemic uncertainties is essential when deploying Graph Neural Networks (GNNs) in high-stakes applications such as drug discovery and financial fraud detection, where reliable predictions are critical. Although Evidential Deep Learning (EDL) efficiently quantifies uncertainty using a Dirichlet distribution over predictive probabilities, existing EDL-based GNN (EGNN) models require modifications to the network architecture and retraining, failing to take advantage of pre-trained models. We propose a plug-and-play framework for uncertainty quantification in GNNs that works with pre-trained models without the need for retraining. Our Evidential Probing Network (EPN) uses a lightweight Multi-Layer Perceptron (MLP) head to extract evidence from learned representations, allowing efficient integration with various GNN architectures. We further introduce evidence-based regularization techniques, referred to as EPN-reg, to enhance the estimation of epistemic uncertainty with theoretical justifications. Extensive experiments demonstrate that the proposed EPN-reg achieves state-of-the-art performance in accurate and efficient uncertainty quantification, making it suitable for real-world deployment.
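The probing idea in the abstract, mapping frozen GNN embeddings to Dirichlet evidence and reading uncertainties off the resulting distribution, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the single linear probe layer, the function names, and the vacuity and entropy formulas (standard in evidential deep learning) are assumptions filled in from the EDL literature.

```python
import numpy as np

def softplus(x):
    # Smooth non-negativity: maps probe logits to evidence >= 0.
    return np.log1p(np.exp(x))

def epn_probe(embeddings, W, b):
    """Hypothetical probe head (a single linear layer for brevity):
    maps frozen GNN embeddings to Dirichlet parameters alpha."""
    evidence = softplus(embeddings @ W + b)  # non-negative evidence
    alpha = evidence + 1.0                   # Dirichlet concentration parameters
    return alpha

def uncertainties(alpha):
    """Standard EDL readouts: vacuity as an epistemic signal and the
    entropy of the mean prediction as a total-uncertainty proxy."""
    K = alpha.shape[-1]
    S = alpha.sum(axis=-1, keepdims=True)    # Dirichlet strength per node
    vacuity = K / S.squeeze(-1)              # high when little evidence is collected
    probs = alpha / S                        # expected class probabilities
    entropy = -(probs * np.log(probs)).sum(axis=-1)
    return vacuity, entropy
```

Because the backbone stays frozen, only `W` and `b` would need to be trained, which is what makes the approach cheap relative to retraining an EGNN end to end.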
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty in Graph Neural Networks for high-stakes applications.
Enable uncertainty estimation without retraining pre-trained GNN models.
Enhance epistemic uncertainty estimation using evidence-based regularization techniques.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Plug-and-play framework for GNN uncertainty quantification
Lightweight MLP head extracts evidence from representations
Evidence-based regularization enhances epistemic uncertainty estimation
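The abstract does not spell out EPN-reg's regularizers. As an illustration of the kind of evidence-based penalty common in evidential deep learning, the sketch below implements the KL divergence between a Dirichlet distribution and the uniform Dirichlet, which discourages spurious evidence. The function name and the choice of this particular regularizer are assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.special import gammaln, digamma

def kl_dirichlet_uniform(alpha):
    """KL( Dir(alpha) || Dir(1, ..., 1) ): zero when alpha is uniform,
    growing as evidence concentrates. A standard EDL regularization
    term; shown here only as an example of an evidence-based penalty."""
    K = alpha.shape[-1]
    S = alpha.sum(axis=-1)
    return (gammaln(S) - gammaln(K)
            - gammaln(alpha).sum(axis=-1)
            + ((alpha - 1.0) * (digamma(alpha) - digamma(S)[..., None])).sum(axis=-1))
```

In practice such a term is weighted and added to the classification loss, so the probe is pushed to emit evidence only where the data supports it.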
👥 Authors
Linlin Yu (University of Texas at Dallas)
Kangshuo Li (The University of Texas at Dallas)
P. Saha (The University of Texas at Dallas)
Yifei Lou (University of North Carolina at Chapel Hill)
Feng Chen (The University of Texas at Dallas)