Conf-GNNRec: Quantifying and Calibrating the Prediction Confidence for GNN-based Recommendation Methods

📅 2025-05-22
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Graph neural network (GNN)-based recommender systems suffer from unreliable and overconfident predictions under noise accumulation and data sparsity. To address this, we propose the first prediction confidence quantification and calibration framework specifically designed for GNN recommenders. Our method introduces three key innovations: (1) a dynamic score calibration mechanism that incorporates user-specific bias modeling; (2) a confidence-aware loss function tailored for negative sampling, explicitly penalizing overconfidence; and (3) robust message propagation modeling to stabilize feature aggregation under noisy conditions. Extensive experiments on multiple public benchmark datasets demonstrate significant improvements: average Recall@20 increases by 3.2%, while Expected Calibration Error (ECE) decreases by 41.7%, confirming enhanced accuracy and calibration reliability—particularly in high-noise scenarios. The framework thus improves both predictive trustworthiness and robustness of GNN-based recommendation.
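The summary reports a 41.7% drop in Expected Calibration Error (ECE). ECE is a standard metric: predictions are binned by confidence, and the gap between average confidence and empirical accuracy is averaged across bins, weighted by bin size. A minimal NumPy sketch of the standard binned formulation (not the paper's code; the bin count and inputs are illustrative):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard binned ECE: bin-size-weighted mean |accuracy - confidence|."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # gap between empirical accuracy and mean confidence in this bin
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight by fraction of samples in bin
    return ece
```

A perfectly calibrated model (e.g. 70% confidence, 70% accuracy) yields ECE of 0; a fully overconfident one (100% confidence, 0% accuracy) yields 1.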

📝 Abstract
Recommender systems based on graph neural networks perform well in tasks such as rating and ranking. However, in real-world recommendation scenarios, noise such as user misoperation and malicious advertising gradually accumulates through the message propagation mechanism. Although existing studies mitigate its effects by reducing the propagation weights of noisy edges, the severe sparsity of recommender systems still causes low-weighted noisy neighbors to be mistaken for meaningful information, so predictions based on the polluted nodes are not entirely trustworthy. It is therefore crucial to measure the confidence of predictions in this highly noisy setting. Furthermore, our evaluation of representative GNN-based recommendation methods shows that they suffer from overconfidence. Based on these considerations, we propose a new method to quantify and calibrate the prediction confidence of GNN-based recommendations (Conf-GNNRec). Specifically, we propose a rating calibration method that dynamically adjusts excessive ratings to mitigate overconfidence based on user personalization. We also design a confidence loss function that reduces overconfidence on negative samples and effectively improves recommendation performance. Experiments on public datasets demonstrate the effectiveness of Conf-GNNRec in both prediction confidence and recommendation performance.
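The abstract's exact loss formulation is not reproduced on this page, but the idea of "reducing the overconfidence of negative samples" can be illustrated generically: a BPR-style pairwise ranking loss augmented with a term that penalizes high predicted confidence on negatives. A hedged NumPy sketch, where the penalty term and the weight `lam` are illustrative choices, not the paper's definition:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def confidence_aware_loss(pos_scores, neg_scores, lam=0.1):
    """BPR-style ranking loss plus an illustrative overconfidence penalty.

    pos_scores / neg_scores: raw model scores for positive and sampled
    negative user-item pairs. The penalty discourages assigning high
    confidence (sigmoid score near 1) to negatives.
    """
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    # BPR term: -log sigmoid(pos - neg), averaged over pairs
    bpr = -np.log(sigmoid(pos - neg) + 1e-12).mean()
    # penalty: -log(1 - sigmoid(neg)) grows as negatives get confident
    overconf = -np.log(1.0 - sigmoid(neg) + 1e-12).mean()
    return bpr + lam * overconf
```

Under this sketch, a batch whose negatives score high incurs a strictly larger loss than one whose negatives score low, even at the same positive scores, which is the qualitative behavior the abstract describes.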
Problem

Research questions and friction points this paper is trying to address.

Quantifying prediction confidence in GNN-based recommender systems
Addressing overconfidence in existing GNN recommendation methods
Calibrating ratings to mitigate noise and improve recommendation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic rating calibration for overconfidence mitigation
Confidence loss function for negative samples
Quantifies and calibrates GNN prediction confidence
Authors
Meng Yan (Xidian University, Xi'an, China)
Cai Xu (Xidian University; multi-view (modal) learning, trustworthy machine learning)
Xujing Wang (Xidian University, Xi'an, China)
Ziyu Guan (Xidian University; data mining, machine learning, social media)
Wei Zhao (Xidian University, Xi'an, China)
Yuhang Zhou (Communication University of China, Beijing, China)