The Final Layer Holds the Key: A Unified and Efficient GNN Calibration Framework

📅 2025-05-16
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Graph Neural Networks (GNNs) commonly suffer from underconfident predictions, undermining decision reliability; existing calibration methods rely on auxiliary modules, lack theoretical grounding, and incur additional computational overhead. Method: We propose Terminal-layer Unified Calibration (TUC), a parameter- and architecture-free framework. We first reveal that terminal-layer confidence is jointly governed by class-center-level and node-level calibration. Theoretically, we show that reducing terminal-layer weight decay mitigates underconfidence, while a node-level distance constraint pulls test nodes closer to their predicted class centers. TUC models class centers directly from terminal-layer representations and jointly optimizes weight decay and node-to-center distances. Results: On multiple benchmark datasets, TUC significantly reduces Expected Calibration Error (ECE), achieving an average improvement of 28.6% over prior state-of-the-art methods, while introducing zero additional parameters and incurring negligible computational overhead.
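The weight-decay mechanism in the summary can be sketched numerically: decay shrinks the final-layer weights toward zero, which shrinks the logits and softens the softmax, producing underconfidence. The update rule, learning rate, and toy values below are illustrative assumptions, not the paper's actual training setup.

```python
import numpy as np

def sgd_step(w, grad, lr, weight_decay):
    """One SGD update with decoupled weight decay:
    the decay term shrinks weights toward zero each step."""
    return w - lr * grad - lr * weight_decay * w

def max_softmax(z):
    """Maximum softmax probability, i.e. the model's confidence."""
    e = np.exp(z - z.max())
    return float(e.max() / e.sum())

# Final-layer weights; a zero gradient isolates the decay effect.
w_final = np.ones(3)
zero_grad = np.zeros(3)

w_low_decay  = sgd_step(w_final, zero_grad, lr=0.1, weight_decay=1e-3)
w_high_decay = sgd_step(w_final, zero_grad, lr=0.1, weight_decay=5e-1)

# Smaller decay shrinks the weights less per step.
shrink_low  = np.linalg.norm(w_final - w_low_decay)
shrink_high = np.linalg.norm(w_final - w_high_decay)

# Smaller final-layer weights mean smaller logits, and scaled-down
# logits yield a lower maximum softmax probability (underconfidence).
conf_large = max_softmax(np.array([2.0, 0.0, 0.0]))
conf_small = max_softmax(0.5 * np.array([2.0, 0.0, 0.0]))
```

This is the intuition behind treating the terminal layer's weight decay as a calibration knob: lowering it lets the logits stay larger, counteracting the underconfidence the summary describes.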

๐Ÿ“ Abstract
Graph Neural Networks (GNNs) have demonstrated remarkable effectiveness on graph-based tasks. However, their predictive confidence is often miscalibrated, typically exhibiting under-confidence, which harms the reliability of their decisions. Existing calibration methods for GNNs normally introduce additional calibration components, which fail to capture the intrinsic relationship between the model and the prediction confidence, resulting in limited theoretical guarantees and increased computational overhead. To address this issue, we propose a simple yet efficient graph calibration method. We establish a unified theoretical framework revealing that model confidence is jointly governed by class-centroid-level and node-level calibration at the final layer. Based on this insight, we theoretically show that reducing the weight decay of the final-layer parameters alleviates GNN under-confidence by acting on the class-centroid level, while node-level calibration acts as a finer-grained complement to class-centroid level calibration, which encourages each test node to be closer to its predicted class centroid at the final-layer representations. Extensive experiments validate the superiority of our method.
Problem

Research questions and friction points this paper is trying to address.

GNNs exhibit miscalibrated predictive confidence harming reliability
Existing methods lack theoretical guarantees and increase overhead
Proposes unified framework linking final-layer parameters to calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified GNN calibration via final-layer analysis
Reduce final-layer weight decay for under-confidence
Node-level complements class-centroid calibration
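The class-centroid and node-level ideas in the bullets above can be sketched as follows; the function names and toy data are illustrative assumptions based on the summary, not the paper's released code. Class centers are estimated directly from final-layer representations, and the node-level term measures how far each node sits from its predicted class centroid.

```python
import numpy as np

def class_centroids(H, y_pred, num_classes):
    """Mean final-layer representation per predicted class."""
    return np.stack([H[y_pred == c].mean(axis=0) for c in range(num_classes)])

def node_to_centroid_loss(H, y_pred, centroids):
    """Average squared distance from each node's final-layer
    representation to its predicted class centroid."""
    diffs = H - centroids[y_pred]
    return float((diffs ** 2).sum(axis=1).mean())

# Toy final-layer representations for 4 nodes in 2 classes.
H = np.array([[0.0, 1.0],
              [0.0, 3.0],
              [4.0, 0.0],
              [6.0, 0.0]])
y_pred = np.array([0, 0, 1, 1])

C = class_centroids(H, y_pred, num_classes=2)  # shape (2, 2)
loss = node_to_centroid_loss(H, y_pred, C)
```

Minimizing such a distance term pulls test nodes toward their predicted class centers, which is the finer-grained, node-level complement to the centroid-level effect of reduced weight decay.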
Jincheng Huang
Ph.D. Candidate, University of Electronic Science and Technology of China & SUTD
Graph Machine Learning, Trustworthy AI, VLM, Reliable Prediction

Jie Xu
School of Computer Science and Engineering, University of Electronic Science and Technology of China

Xiaoshuang Shi
University of Electronic Science and Technology of China
Machine Learning, Computer Vision, Medical Image Analysis

Ping Hu
UESTC
Computer Vision, Deep Learning, Image/Video Processing

Lei Feng
Southeast University

Xiaofeng Zhu
School of Computer Science and Engineering, University of Electronic Science and Technology of China