AI Summary
To address the problem of excessively large prediction sets, which limits the practicality of conformal prediction in high-risk applications, this paper proposes a two-stage Selective Conformal Risk Control (SCRC) framework: first, selective classification filters out low-confidence samples; second, calibrated risk control is applied only to the selected subset. SCRC unifies conformal prediction with selective classification for the first time, yielding two novel algorithms: SCRC-T, which guarantees exact coverage in finite samples, and SCRC-I, which provides PAC-style risk guarantees with improved computational efficiency. Both algorithms are theoretically proven to satisfy the target coverage level and a user-specified risk threshold. Empirical evaluation on two public benchmark datasets confirms strict adherence to the coverage and risk constraints; SCRC-I matches the predictive performance of SCRC-T while offering superior computational efficiency and more conservative risk control. The core contribution lies in jointly ensuring statistical reliability, prediction-set compactness, and computational feasibility.
Abstract
Reliable uncertainty quantification is essential for deploying machine learning systems in high-stakes domains. Conformal prediction provides distribution-free coverage guarantees but often produces overly large prediction sets, limiting its practical utility. To address this issue, we propose Selective Conformal Risk Control (SCRC), a unified framework that integrates conformal prediction with selective classification. The framework formulates uncertainty control as a two-stage problem: the first stage selects confident samples for prediction, and the second stage applies conformal risk control on the selected subset to construct calibrated prediction sets. We develop two algorithms under this framework. The first, SCRC-T, preserves exchangeability by computing thresholds jointly over calibration and test samples, offering exact finite-sample guarantees. The second, SCRC-I, is a calibration-only variant that provides PAC-style probabilistic guarantees while being more computationally efficient. Experiments on two public datasets show that both methods achieve the target coverage and risk levels with nearly identical performance, while SCRC-I exhibits slightly more conservative risk control but superior computational practicality. Our results demonstrate that selective conformal risk control offers an effective and efficient path toward compact, reliable uncertainty quantification.
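To make the two-stage idea concrete, the sketch below illustrates a simplified selective split-conformal procedure, not the paper's actual SCRC-T or SCRC-I algorithms: stage one abstains on samples whose top softmax probability falls below a confidence threshold, and stage two computes a standard split-conformal quantile on the retained calibration subset. The function name, the threshold `tau`, and the fixed selection rule are all illustrative assumptions; in the paper the thresholds are calibrated to meet formal coverage and risk guarantees.

```python
import numpy as np

def selective_conformal_sets(cal_probs, cal_labels, test_probs,
                             tau=0.8, alpha=0.1):
    """Illustrative two-stage sketch (hypothetical helper, not the paper's code).

    Stage 1: select samples whose top class probability is at least tau.
    Stage 2: build split-conformal prediction sets at level alpha using
    only the selected calibration samples.
    """
    # Stage 1: selection by confidence on calibration and test samples
    cal_keep = cal_probs.max(axis=1) >= tau
    test_keep = test_probs.max(axis=1) >= tau

    # Stage 2: nonconformity score = 1 - probability of the true class,
    # computed on the selected calibration subset only
    sel_probs = cal_probs[cal_keep]
    sel_labels = cal_labels[cal_keep]
    scores = 1.0 - sel_probs[np.arange(len(sel_labels)), sel_labels]

    # Finite-sample-corrected quantile of the calibration scores
    n = len(scores)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, q_level, method="higher")

    # Prediction set: all classes whose score is within the quantile;
    # abstain (None) on test samples rejected in stage 1
    return [np.where(1.0 - p <= qhat)[0] if kept else None
            for p, kept in zip(test_probs, test_keep)]
```

Abstaining (returning `None`) on low-confidence inputs is what keeps the prediction sets compact: the conformal quantile is calibrated only over the high-confidence region, rather than being inflated to cover the hardest samples.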