🤖 AI Summary
This work addresses the challenge of using prediction sets optimally for downstream decision-making while preserving a prescribed coverage probability. The authors propose a decision-theoretic framework that minimizes the expected loss against the worst-case distribution consistent with the coverage constraint, jointly characterizing the minimax optimal policy for a fixed prediction set and the set construction that minimizes the resulting robust risk. The optimal policy balances the worst-case loss inside the set against a penalty for potential losses outside it. The resulting Risk-Optimal Conformal Prediction (ROCP) algorithm retains finite-sample, distribution-free marginal coverage, and in experiments on medical diagnosis and safety-critical decision-making tasks it reduces costly errors relative to baselines, particularly when out-of-set mistakes carry high cost.
📝 Abstract
Prediction sets can wrap around any ML model to cover unknown test outcomes with a guaranteed probability. Yet, it remains unclear how to use them optimally for downstream decision-making. Here, we propose a decision-theoretic framework that seeks to minimize the expected loss (risk) against a worst-case distribution consistent with the prediction set's coverage guarantee. We first characterize the minimax optimal policy for a fixed prediction set, showing that it balances the worst-case loss inside the set with a penalty for potential losses outside the set. Building on this, we derive the optimal prediction set construction that minimizes the resulting robust risk subject to a coverage constraint. Finally, we introduce Risk-Optimal Conformal Prediction (ROCP), a practical algorithm that targets these risk-minimizing sets while maintaining finite-sample distribution-free marginal coverage. Empirical evaluations on medical diagnosis and safety-critical decision-making tasks demonstrate that ROCP reduces critical mistakes compared to baselines, particularly when out-of-set errors are costly.
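To make the minimax policy described in the abstract concrete, below is a minimal sketch for a finite label space, not the paper's implementation. It assumes the worst-case distribution may place up to `alpha` mass on any label outside the set (so the robust risk of an action is a `1 - alpha` weighted worst case inside the set plus an `alpha` weighted worst case overall); the function name `robust_action`, the loss matrix, and this exact objective are illustrative assumptions, as the paper's formulation is not reproduced here.

```python
import numpy as np

def robust_action(loss, in_set, alpha):
    """Minimax action against all label distributions that place at least
    1 - alpha probability mass on the prediction set C(x).

    Args:
        loss:   (n_actions, n_labels) array; loss[a, y] is the cost of
                taking action a when the true label is y.
        in_set: (n_labels,) boolean mask marking the prediction set C(x).
        alpha:  miscoverage level, i.e. the constraint P(Y in C(x)) >= 1 - alpha.

    Returns:
        Index of the risk-minimizing action and the robust risk of each action.
    """
    # Worst case inside the set: the adversary concentrates the required
    # 1 - alpha mass on the costliest in-set label.
    worst_in = loss[:, in_set].max(axis=1)
    # Out-of-set penalty: the remaining alpha mass may sit on the costliest
    # label anywhere, which the coverage constraint does not rule out.
    worst_any = loss.max(axis=1)
    robust_risk = (1 - alpha) * worst_in + alpha * worst_any
    return int(np.argmin(robust_risk)), robust_risk

# Hypothetical toy example: actions = (treat, defer), labels = (benign, malignant).
loss = np.array([[1.0,  0.0],    # treat: small cost if benign, none if malignant
                 [0.0, 20.0]])   # defer: catastrophic if the case is malignant
in_set = np.array([True, False])  # prediction set contains only "benign"
action, risk = robust_action(loss, in_set, alpha=0.1)
print(action, risk)  # -> 0 [1. 2.]: treating is the safer robust choice
```

In this toy run, acting on the set contents alone (deferring because the set says "benign") is dominated once the alpha-mass tail risk of an out-of-set malignancy is priced in. ROCP's set construction goes a step further, choosing C(x) itself to minimize this robust risk subject to the coverage constraint.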