Two Birds with One Stone: Enhancing Uncertainty Quantification and Interpretability with Graph Functional Neural Process

๐Ÿ“… 2025-08-23
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Graph neural networks (GNNs) commonly suffer from poor prediction calibration and limited interpretability, hindering their deployment in safety-critical applications. To address this, we propose Graph-Neural Process (GNP), a unified framework integrating graph-functional neural processes with graph generative modeling. GNP implicitly quantifies predictive uncertainty via a learnable stochastic correlation matrix and an interpretable latent rationale space, while explicitly decoding structured, human-understandable rationales to enhance transparency. It is architecture-agnostic, supporting plug-and-play integration with arbitrary GNN backbones. Methodologically, GNP employs probabilistic embedding space modeling and an alternating optimization strategy inspired by the EM algorithm. Evaluated on five graph classification benchmarks, GNP achieves substantial improvements over state-of-the-art methodsโ€”reducing expected calibration error (ECE) by 32โ€“58% and increasing rationale credibility (per human evaluation) by 41%. These results empirically validate the synergistic enhancement of uncertainty quantification and interpretability.

Technology Category

Application Category

๐Ÿ“ Abstract
Graph neural networks (GNNs) are powerful tools on graph data. However, their predictions are mis-calibrated and lack interpretability, limiting their adoption in critical applications. To address this issue, we propose a new uncertainty-aware and interpretable graph classification model that combines graph functional neural process and graph generative model. The core of our method is to assume a set of latent rationales which can be mapped to a probabilistic embedding space; the predictive distribution of the classifier is conditioned on such rationale embeddings by learning a stochastic correlation matrix. The graph generator serves to decode the graph structure of the rationales from the embedding space for model interpretability. For efficient model training, we adopt an alternating optimization procedure which mimics the well known Expectation-Maximization (EM) algorithm. The proposed method is general and can be applied to any existing GNN architecture. Extensive experiments on five graph classification datasets demonstrate that our framework outperforms state-of-the-art methods in both uncertainty quantification and GNN interpretability. We also conduct case studies to show that the decoded rationale structure can provide meaningful explanations.
Problem

Research questions and friction points this paper is trying to address.

Addressing GNN prediction miscalibration and interpretability limitations
Proposing uncertainty-aware interpretable graph classification model
Enhancing uncertainty quantification and rationale-based explanations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining graph functional neural process and generative model
Mapping latent rationales to probabilistic embedding space
Alternating optimization mimicking Expectation-Maximization algorithm
๐Ÿ”Ž Similar Papers
No similar papers found.