🤖 AI Summary
To address data silos, communication constraints, and the absence of central coordination in safety-critical edge healthcare settings, this paper extends conformal prediction to decentralized graph networks. We propose two distributed calibration paradigms: quantile-based decentralized conformal prediction (Q-DCP), which accelerates convergence through tailored smoothing and regularization of distributed quantile regression, and histogram-based decentralized conformal prediction (H-DCP), which builds on consensus histogram estimation. Our framework integrates distributed quantile regression, graph message passing, and regularized smoothing optimization, providing distribution-free statistical coverage guarantees on arbitrary graph topologies. We establish theoretical asymptotic optimality and demonstrate empirically that our methods reduce communication overhead, improve robustness to hyperparameter choices, and yield prediction set sizes approaching those of centralized baselines. This work delivers the first theoretically grounded decentralized calibration framework for trustworthy edge AI.
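To make the quantile-based route concrete, here is a minimal toy sketch of the idea behind Q-DCP: each device takes gradient steps on a sigmoid-smoothed pinball loss over its local conformity scores, then averages its quantile estimate with its graph neighbors. Everything here is an illustrative assumption (the ring topology, mixing weights, sigmoid smoothing width, step size, and synthetic uniform scores), not the paper's actual algorithm or its specific regularization terms.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hypothetical setup: K devices on a ring, each holding local conformity scores.
K, n_local = 8, 200
tau = 0.9                  # target (1 - alpha) quantile level
h, lr, T = 0.05, 0.5, 400  # smoothing width, step size, iterations (all assumed)
scores = rng.uniform(0.0, 1.0, (K, n_local))

# Doubly stochastic mixing matrix for the ring (message passing with neighbors).
W = np.zeros((K, K))
for i in range(K):
    W[i, i], W[i, (i - 1) % K], W[i, (i + 1) % K] = 0.5, 0.25, 0.25

q = np.zeros(K)  # each device's running quantile estimate
for _ in range(T):
    # Gradient of the sigmoid-smoothed pinball loss:
    # (smoothed local empirical CDF evaluated at q) minus tau.
    grad = sigmoid((q[:, None] - scores) / h).mean(axis=1) - tau
    # Local gradient step followed by consensus averaging with ring neighbors.
    q = W @ (q - lr * grad)

print(q.round(3))  # estimates cluster near the global 0.9-quantile
```

The smoothing replaces the pinball loss's non-differentiable kink with a sigmoid, which is what lets plain gradient descent converge quickly; the consensus step keeps the per-device estimates from drifting apart despite heterogeneous local data.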
📝 Abstract
Post-hoc calibration of pre-trained models is critical for ensuring reliable inference, especially in safety-critical domains such as healthcare. Conformal Prediction (CP) offers a robust post-hoc calibration framework, providing distribution-free statistical coverage guarantees for prediction sets by leveraging held-out datasets. In this work, we address a decentralized setting where each device has limited calibration data and can communicate only with its neighbors over an arbitrary graph topology. We propose two message-passing-based approaches for achieving reliable inference via CP: quantile-based distributed conformal prediction (Q-DCP) and histogram-based distributed conformal prediction (H-DCP). Q-DCP employs distributed quantile regression enhanced with tailored smoothing and regularization terms to accelerate convergence, while H-DCP uses a consensus-based histogram estimation approach. Through extensive experiments, we investigate the trade-offs between hyperparameter tuning requirements, communication overhead, coverage guarantees, and prediction set sizes across different network topologies.
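The histogram-based route can likewise be sketched in toy form: each device summarizes its local conformity scores as a normalized histogram, neighbors repeatedly average histograms until consensus, and any device then reads the conformal quantile off the shared cumulative histogram. The ring topology, bin count, number of gossip rounds, and synthetic uniform scores below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K devices on a ring, each holding local conformity scores.
K, n_local, n_bins = 8, 50, 100
bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
scores = [rng.uniform(0.0, 1.0, n_local) for _ in range(K)]

# Each device builds a normalized histogram of its local scores.
hists = np.stack([np.histogram(s, bins=bin_edges)[0] / n_local for s in scores])

# Gossip-style consensus on a ring: repeated averaging with neighbors drives
# every device's histogram toward the global average.
W = np.zeros((K, K))
for i in range(K):
    W[i, i], W[i, (i - 1) % K], W[i, (i + 1) % K] = 0.5, 0.25, 0.25
for _ in range(200):
    hists = W @ hists

# Any single device can now read off the (1 - alpha) conformal threshold from
# its consensus histogram, as in split conformal prediction.
alpha = 0.1
cdf = np.cumsum(hists[0])
q_hat = bin_edges[1:][np.searchsorted(cdf, 1 - alpha)]

# A prediction set would then contain every label whose score is <= q_hat.
print(round(float(q_hat), 3))
```

Rounding the threshold up to the right edge of the selected bin errs on the conservative side, so binning the scores costs a little prediction-set size rather than coverage; finer histograms shrink that gap at the price of more communication per round.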