๐ค AI Summary
Graph neural networks (GNNs) commonly suffer from poor prediction calibration and limited interpretability, hindering their deployment in safety-critical applications. To address this, we propose Graph-Neural Process (GNP), a unified framework integrating graph-functional neural processes with graph generative modeling. GNP implicitly quantifies predictive uncertainty via a learnable stochastic correlation matrix and an interpretable latent rationale space, while explicitly decoding structured, human-understandable rationales to enhance transparency. It is architecture-agnostic, supporting plug-and-play integration with arbitrary GNN backbones. Methodologically, GNP employs probabilistic embedding space modeling and an alternating optimization strategy inspired by the EM algorithm. Evaluated on five graph classification benchmarks, GNP achieves substantial improvements over state-of-the-art methodsโreducing expected calibration error (ECE) by 32โ58% and increasing rationale credibility (per human evaluation) by 41%. These results empirically validate the synergistic enhancement of uncertainty quantification and interpretability.
๐ Abstract
Graph neural networks (GNNs) are powerful tools on graph data. However, their predictions are mis-calibrated and lack interpretability, limiting their adoption in critical applications. To address this issue, we propose a new uncertainty-aware and interpretable graph classification model that combines graph functional neural process and graph generative model. The core of our method is to assume a set of latent rationales which can be mapped to a probabilistic embedding space; the predictive distribution of the classifier is conditioned on such rationale embeddings by learning a stochastic correlation matrix. The graph generator serves to decode the graph structure of the rationales from the embedding space for model interpretability. For efficient model training, we adopt an alternating optimization procedure which mimics the well known Expectation-Maximization (EM) algorithm. The proposed method is general and can be applied to any existing GNN architecture. Extensive experiments on five graph classification datasets demonstrate that our framework outperforms state-of-the-art methods in both uncertainty quantification and GNN interpretability. We also conduct case studies to show that the decoded rationale structure can provide meaningful explanations.