🤖 AI Summary
Graph Neural Networks (GNNs) suffer from prediction uncertainty arising from data noise, model misspecification, and other factors, undermining their reliability in critical tasks such as node classification and link prediction. To address this, we propose the first unified uncertainty taxonomy specifically designed for GNNs, systematically categorizing uncertainty sources across three levels: data, model, and inference. Our framework integrates established uncertainty modeling techniques—including Bayesian approximate inference, Monte Carlo DropPath, ensemble learning, confidence calibration, and information-theoretic entropy measures. Based on a comprehensive analysis of over 120 papers, we establish a holistic evaluation paradigm bridging theoretical foundations and empirical practice, thereby reconciling methodological fragmentation across GNN subfields. We further identify six key open challenges, offering a principled methodology guide and reproducible benchmark suite for trustworthy graph learning.
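To make the entropy-based measures concrete: the sketch below (an illustrative assumption, not code from the survey) shows how predictive uncertainty is commonly decomposed into aleatoric and epistemic parts from stochastic forward passes, e.g. Monte Carlo dropout samples or ensemble members. The function name and array shapes are hypothetical choices for this example.

```python
import numpy as np

def uncertainty_from_samples(probs):
    """Entropy-based uncertainty decomposition from S stochastic forward passes.

    probs: array of shape (S, N, C) -- softmax outputs for N nodes over C
    classes, gathered e.g. from Monte Carlo dropout passes or an ensemble.
    Returns per-node (total, aleatoric, epistemic) uncertainty estimates.
    """
    eps = 1e-12                                       # numerical stability
    mean_p = probs.mean(axis=0)                       # (N, C) predictive mean
    # Total uncertainty: entropy of the averaged prediction.
    total = -(mean_p * np.log(mean_p + eps)).sum(-1)
    # Aleatoric part: average entropy of each individual prediction.
    aleatoric = -(probs * np.log(probs + eps)).sum(-1).mean(axis=0)
    # Epistemic part: mutual information between prediction and model,
    # i.e. what remains after removing data-inherent noise.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

Nodes where the sampled models disagree receive high epistemic uncertainty even if each individual prediction is confident, which is exactly the signal Bayesian and ensemble approaches exploit.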
📝 Abstract
Graph Neural Networks (GNNs) have been extensively used in various real-world applications. However, the predictive uncertainty of GNNs, stemming from diverse sources such as inherent randomness in data and errors in model training, can lead to unstable and erroneous predictions. Identifying, quantifying, and utilizing uncertainty are therefore essential to enhancing both downstream-task performance and the reliability of GNN predictions. This survey provides a comprehensive overview of GNNs from the perspective of uncertainty, with an emphasis on its integration into graph learning. We compare and summarize existing graph uncertainty theory and methods, alongside the corresponding downstream tasks, thereby bridging the gap between theory and practice while connecting different GNN communities. Moreover, our work provides valuable insights into promising directions in this field.