🤖 AI Summary
This work addresses the challenge of achieving group-conditional coverage guarantees in federated learning, where calibration data are distributed across multiple clients and exhibit inherent group structure. The authors propose Group-Conditional Federated Conformal Prediction (GC-FCP), the first method to provide rigorous group-conditional coverage guarantees in a federated setting. At its core, GC-FCP constructs mergeable group-stratified coreset summaries from local calibration scores, enabling each client to transmit compact, weighted representations that support efficient communication and server-side aggregation. Empirical evaluations on both synthetic and real-world datasets demonstrate that GC-FCP consistently maintains its theoretical coverage guarantees while achieving predictive performance comparable to centralized conformal calibration baselines.
📝 Abstract
Deploying trustworthy AI systems requires principled uncertainty quantification. Conformal prediction (CP) is a widely used framework for constructing prediction sets with distribution-free coverage guarantees. In many practical settings, including healthcare, finance, and mobile sensing, the calibration data required for CP are distributed across multiple clients, each with its own local data distribution. In this federated setting, data can often be partitioned into potentially overlapping groups, which may reflect client-specific strata or cross-cutting attributes such as demographic or semantic categories. We propose group-conditional federated conformal prediction (GC-FCP), a novel protocol that provides group-conditional coverage guarantees. GC-FCP constructs mergeable, group-stratified coresets from local calibration scores, enabling clients to communicate compact weighted summaries that support efficient aggregation and calibration at the server. Experiments on synthetic and real-world datasets show that GC-FCP maintains its group-conditional coverage guarantees while achieving predictive performance comparable to centralized calibration baselines.
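To make the protocol concrete, here is a minimal sketch of the two-stage pipeline the abstract describes: clients summarize their local calibration scores per group as compact weighted samples, and the server merges the summaries and computes a per-group score threshold as a weighted empirical quantile. This is an illustrative stand-in, not the paper's actual method: the subsampling-based "coreset" below, the function names, and the parameter `k` are all assumptions made for the sketch, and the paper's mergeable group-stratified coresets and finite-sample corrections are not reproduced here.

```python
import numpy as np

def client_summary(scores, groups, k=50, rng=None):
    """Hypothetical client-side step: build a per-group weighted summary.

    For each group, keep up to k subsampled scores and weight each kept
    score so the summary represents the full local group. This subsample
    is only a stand-in for the paper's mergeable coreset construction.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    groups = np.asarray(groups)
    summary = {}
    for g in set(groups.tolist()):
        s = scores[groups == g]
        idx = rng.choice(len(s), size=min(k, len(s)), replace=False)
        # Each kept score stands in for len(s) / len(idx) original scores.
        summary[g] = (s[idx], np.full(len(idx), len(s) / len(idx)))
    return summary

def server_merge_and_calibrate(summaries, alpha=0.1):
    """Server-side step: merge client summaries per group, then set a
    group-conditional threshold at the weighted (1 - alpha) quantile."""
    merged = {}
    for summ in summaries:
        for g, (s, w) in summ.items():
            ms, mw = merged.get(g, (np.array([]), np.array([])))
            merged[g] = (np.concatenate([ms, s]), np.concatenate([mw, w]))
    thresholds = {}
    for g, (s, w) in merged.items():
        order = np.argsort(s)
        s, w = s[order], w[order]
        cdf = np.cumsum(w) / w.sum()  # weighted empirical CDF
        thresholds[g] = s[np.searchsorted(cdf, 1 - alpha)]
    return thresholds
```

At prediction time, a test point belonging to group `g` would receive the prediction set induced by `thresholds[g]`, which is what yields group-conditional rather than only marginal coverage in this simplified picture.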