🤖 AI Summary
This work addresses the lack of standardized evaluation protocols for uncertainty quantification in graph node classification. We systematically formalize and reconstruct the methodological framework of graph conformal prediction (GCP). Specifically, we propose the first comprehensive design-choice analysis tailored to graph-structured data, encompassing standardized baseline models, calibration strategies, and evaluation metrics. To enhance scalability, we introduce novel acceleration techniques that significantly reduce computational complexity, enabling efficient inference on large-scale graphs. Extensive experiments across multiple benchmark datasets validate both theoretical soundness and practical efficacy: empirical coverage satisfies user-specified confidence levels, while inference speed improves by up to severalfold. We publicly release fully reproducible code and a detailed practitioner's guide, facilitating the standardization and industrial adoption of uncertainty quantification for graph learning.
📝 Abstract
Conformal prediction has become increasingly popular for quantifying the uncertainty associated with machine learning models. Recent work in graph uncertainty quantification has built upon this approach for graph conformal prediction. The nascent nature of these explorations has led to conflicting choices of implementations, baselines, and evaluation protocols. In this work, we analyze the design choices made in the literature and discuss the tradeoffs associated with existing methods. Building on existing implementations, we introduce techniques that scale these methods to large-scale graph datasets without sacrificing performance. Our theoretical and empirical results justify our recommendations for future scholarship in graph conformal prediction.
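For readers unfamiliar with the underlying procedure, the split conformal step that graph conformal prediction methods build on can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: the function name and the nonconformity score (one minus the probability assigned to the true class) are assumptions chosen for exposition, and real GCP methods differ in how they compute scores over graph structure.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    cal_probs:  (n, K) predicted class probabilities on a held-out calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) predicted class probabilities for test nodes
    alpha:      miscoverage level; sets cover the true label with prob. >= 1 - alpha
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # A test node's prediction set contains every class whose score is below the threshold.
    return (1.0 - test_probs) <= qhat  # boolean (m, K) mask of prediction sets
```

The coverage guarantee is distribution-free but relies on exchangeability between calibration and test points, which is exactly the assumption that message-passing over a graph can complicate and that motivates graph-specific calibration strategies.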