Enhancing Trustworthiness of Graph Neural Networks with Rank-Based Conformal Training

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Graph Neural Networks (GNNs) often yield unreliable uncertainty estimates in high-stakes applications, lacking statistical guarantees on prediction coverage. Method: This paper proposes RCP-GNN, a ranking-based conformal prediction framework integrated into GNN training. Its core innovation is the first introduction of a differentiable ranking-aware conformal loss into the GNN joint optimization objective, enabling simultaneous improvement of prediction set efficiency (i.e., average size) and marginal coverage calibration, while dynamically adapting to graph topology. Results: Evaluated on multiple real-world graph datasets, RCP-GNN consistently achieves user-specified marginal coverage levels across diverse settings, with prediction sets substantially smaller than those of state-of-the-art baselines. It delivers statistically rigorous and practically viable uncertainty quantification for node classification—ensuring validity without sacrificing predictive efficiency.

📝 Abstract
Graph Neural Networks (GNNs) have been widely used across a variety of fields because of their great potential for representing graph-structured data. However, the lack of rigorous uncertainty estimates limits their application in high-stakes settings. Conformal Prediction (CP) can produce statistically guaranteed uncertainty estimates by using the classifier's probability estimates to construct prediction sets, which contain the true class with a user-specified probability. In this paper, we propose a rank-based CP-during-training framework for GNNs (RCP-GNN) that provides reliable uncertainty estimates to enhance the trustworthiness of GNNs in the node classification setting. By exploiting the rank information of the classifier's outputs, prediction sets with a desired coverage rate can be constructed efficiently. We further explore the strategy of CP during training with a differentiable rank-based conformity loss function, which adapts prediction sets to network topology information. In this way, the composition of prediction sets can be guided by the joint goal of reducing inefficiency and probability estimation errors. Extensive experiments on several real-world datasets show that our model achieves any pre-defined target marginal coverage while significantly reducing inefficiency compared with state-of-the-art methods.
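To make the abstract's mechanism concrete, here is a minimal sketch of split conformal prediction with a rank-based (APS-style) conformity score: the score of a class is the cumulative probability mass of all classes ranked at or above it, a threshold is calibrated on held-out data, and test-time prediction sets collect every class whose score falls below that threshold. This is a generic illustration of the rank-based CP idea, not the paper's RCP-GNN implementation (which additionally trains the GNN with a differentiable version of this loss); the function names are hypothetical.

```python
import numpy as np

def rank_score(probs, label):
    # Rank-based conformity score: total probability mass of all classes
    # ranked at or above the true label (APS-style, no randomization).
    # A better-ranked true class yields a smaller score.
    order = np.argsort(-probs)                 # classes by descending probability
    rank = int(np.where(order == label)[0][0])
    return probs[order][: rank + 1].sum()

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    # Split conformal prediction: calibrate a score threshold on a held-out
    # set, then include every class whose cumulative-rank score is <= threshold.
    scores = np.array([rank_score(p, y) for p, y in zip(cal_probs, cal_labels)])
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    sets = []
    for p in test_probs:
        order = np.argsort(-p)
        cum = np.cumsum(p[order])
        # Smallest prefix of ranked classes whose mass reaches the threshold;
        # always include at least the top-ranked class.
        k = min(int(np.searchsorted(cum, q, side="left")) + 1, len(p))
        sets.append(set(order[:k].tolist()))
    return sets
```

With a well-calibrated classifier, the marginal coverage guarantee — the true class lands in the set with probability at least 1 - alpha — holds by construction; the paper's contribution is shrinking the resulting sets (inefficiency) by optimizing a differentiable surrogate of this procedure during GNN training.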
Problem

Research questions and friction points this paper is trying to address.

Graph Neural Networks
Uncertainty Estimation
Reliability
Innovation

Methods, ideas, or system contributions that make the work stand out.

RCP-GNN
Ranking-based Approach
Adjusted Loss Function
Ting Wang
Department of Systems Engineering, City University of Hong Kong, Hong Kong SAR, China
Zhixin Zhou
Alpha Benito Research
Statistics, Machine Learning
Rui Luo
Department of Systems Engineering, City University of Hong Kong, Hong Kong SAR, China