🤖 AI Summary
Existing graph neural network (GNN) methods predominantly focus on node- or edge-level uncertainty modeling and lack principled approaches for quantifying prediction uncertainty in graph-level learning tasks. Method: We propose the first variational framework for posterior predictive distribution modeling tailored to graph-level tasks. Our approach takes pre-trained GNN-derived graph-level embeddings as input and employs data-adaptive variational inference to explicitly capture uncertainty in graph representations, enabling uncertainty-aware graph-level predictions. Contribution/Results: This work pioneers the integration of posterior predictive distribution modeling into graph-level learning, seamlessly unifying deep representation learning with probabilistic inference. Extensive experiments on multiple standard graph classification benchmarks demonstrate that our method significantly improves uncertainty quantification accuracy, thereby enhancing model decision reliability and interpretability.
📝 Abstract
Accurate modelling and quantification of predictive uncertainty is crucial in deep learning, since it allows a model to make safer decisions when the data is ambiguous and helps users understand the model's confidence in its predictions. Alongside the rapidly growing research focus on *graph neural networks* (GNNs) in recent years, numerous techniques have been proposed to capture the uncertainty in their predictions. However, most of these approaches are specifically designed for node- or link-level tasks and cannot be directly applied to graph-level learning problems. In this paper, we propose a novel variational modelling framework for the *posterior predictive distribution* (PPD) to obtain uncertainty-aware predictions in graph-level learning tasks. Based on a graph-level embedding derived from one of the existing GNNs, our framework can learn the PPD in a data-adaptive fashion. Experimental results on several benchmark datasets demonstrate the effectiveness of our approach.
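To make the idea concrete, here is a minimal, hypothetical sketch of how a posterior predictive distribution (PPD) over graph-level predictions can be estimated by Monte Carlo sampling. It assumes a variational Gaussian posterior (parameterised by `mu` and `log_var`) over a pre-trained graph-level embedding and a simple linear classifier head (`W`, `b`); all names and the specific architecture are illustrative, not the paper's actual implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_distribution(mu, log_var, W, b, n_samples=100, seed=None):
    """Monte Carlo estimate of the posterior predictive distribution.

    mu, log_var : variational Gaussian parameters over a graph-level
                  embedding, shape [d] (illustrative assumption).
    W, b        : linear classifier head mapping embeddings to class logits.
    Returns the averaged class probabilities (the PPD estimate) and the
    predictive entropy as an uncertainty score.
    """
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n_samples, mu.shape[0]))
    # Reparameterised samples from the variational posterior q(z)
    z = mu + np.exp(0.5 * log_var) * eps
    # Per-sample class probabilities, then average over samples
    probs = softmax(z @ W + b)
    p_mean = probs.mean(axis=0)
    # Entropy of the averaged distribution quantifies predictive uncertainty
    entropy = -(p_mean * np.log(p_mean + 1e-12)).sum()
    return p_mean, entropy
```

In this sketch, higher predictive entropy flags graphs on which the model should abstain or defer, which is the kind of uncertainty-aware decision-making the abstract motivates.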