🤖 AI Summary
Existing uncertainty quantification methods for GNNs, such as Bayesian inference and ensemble learning, often fail to reliably characterize epistemic uncertainty in out-of-distribution (OOD) node classification, especially on heterophilic graphs. To address this, the authors propose Credal Graph Neural Networks (CGNNs), the first framework to introduce credal-set modeling into graph learning, enabling explicit set-valued predictions that capture epistemic uncertainty. CGNNs leverage the complementary information carried by different message-passing layers to design a credal learning strategy tailored to graphs, and the analysis highlights the critical role of graph homophily in shaping the quality of uncertainty estimates. Evaluated on multiple heterophilic graph benchmarks, CGNNs significantly improve the calibration and robustness of uncertainty estimates under OOD conditions, achieving state-of-the-art performance in both uncertainty quantification and downstream node classification.
📝 Abstract
Uncertainty quantification is essential for deploying reliable Graph Neural Networks (GNNs), where existing approaches primarily rely on Bayesian inference or ensembles. In this paper, we introduce the first credal graph neural networks (CGNNs), which extend credal learning to the graph domain by training GNNs to output set-valued predictions in the form of credal sets. To account for the distinctive nature of message passing in GNNs, we develop a complementary approach to credal learning that leverages different aspects of layer-wise information propagation. We assess our approach on uncertainty quantification in node classification under out-of-distribution conditions. Our analysis highlights the critical role of the graph homophily assumption in shaping the effectiveness of uncertainty estimates. Extensive experiments demonstrate that CGNNs deliver more reliable representations of epistemic uncertainty and achieve state-of-the-art performance under distributional shift on heterophilic graphs.
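To make the idea of set-valued prediction concrete, here is a minimal toy sketch of decision-making with a credal set. It assumes a simple interval representation (per-class lower/upper probabilities) and the standard interval-dominance rule; this is an illustrative assumption for exposition, not the paper's actual CGNN construction, which learns credal sets through the GNN itself.

```python
# Toy credal-set prediction sketch (illustrative assumptions, not CGNN's
# actual method): a credal set over C classes is approximated here by
# per-class probability intervals [lower[i], upper[i]].

def interval_dominance(lower, upper):
    """Set-valued prediction via interval dominance: class i is kept
    unless some class j has lower[j] > upper[i]."""
    max_lower = max(lower)
    return [i for i, u in enumerate(upper) if u >= max_lower]

def epistemic_width(lower, upper):
    """Total interval width: a simple proxy for epistemic uncertainty
    (0 for a precise distribution, larger for wider credal sets)."""
    return sum(u - l for l, u in zip(lower, upper))

# Confident case: tight intervals, a single class survives.
lo1, up1 = [0.70, 0.20, 0.05], [0.75, 0.25, 0.10]
# High-uncertainty case (e.g. an OOD node): wide, overlapping intervals,
# so several classes remain in the predicted set.
lo2, up2 = [0.20, 0.15, 0.10], [0.60, 0.55, 0.50]

print(interval_dominance(lo1, up1))  # single-class prediction
print(interval_dominance(lo2, up2))  # set-valued prediction
```

The key behavior this illustrates: when evidence is scarce the credal set widens, the predicted set grows, and the model abstains from committing to one class rather than reporting an overconfident point estimate.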