Decision Theoretic Foundations for Conformal Prediction: Optimal Uncertainty Quantification for Risk-Averse Agents

📅 2025-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the joint optimization of predictive uncertainty quantification and downstream decision-making for risk-averse decision-makers, such as clinicians, in high-stakes settings. We propose the Risk-Averse Calibration (RAC) framework, which (i) establishes, for the first time, the statistical optimality of prediction sets for Value-at-Risk (VaR) minimization; (ii) introduces a coupled max-min decision mechanism that jointly optimizes prediction sets and action policies; and (iii) provides a distribution-free construction, computable from finite samples, grounded in conformal prediction, decision theory, and robust statistical inference. Empirically, on medical diagnosis and recommendation tasks, RAC achieves significantly higher utility than existing uncertainty quantification methods while rigorously satisfying user-specified risk constraints, demonstrating both theoretical soundness and practical efficacy in safety-critical applications.

📝 Abstract
A fundamental question in data-driven decision making is how to quantify the uncertainty of predictions in ways that can usefully inform downstream action. This interface between prediction uncertainty and decision-making is especially important in risk-sensitive domains, such as medicine. In this paper, we develop decision-theoretic foundations that connect uncertainty quantification using prediction sets with risk-averse decision-making. Specifically, we answer three fundamental questions: (1) What is the correct notion of uncertainty quantification for risk-averse decision makers? We prove that prediction sets are optimal for decision makers who wish to optimize their value at risk. (2) What is the optimal policy that a risk-averse decision maker should use to map prediction sets to actions? We show that a simple max-min decision policy is optimal for risk-averse decision makers. Finally, (3) How can we derive prediction sets that are optimal for such decision makers? We provide an exact characterization in the population regime and a distribution-free finite-sample construction. Answering these questions naturally leads to an algorithm, Risk-Averse Calibration (RAC), which follows a provably optimal design for deriving action policies from predictions. RAC is designed to be both practical (capable of leveraging the quality of predictions in a black-box manner to enhance downstream utility) and safe (adhering to a user-defined risk threshold and optimizing the corresponding risk quantile of the user's downstream utility). Finally, we experimentally demonstrate the significant advantages of RAC in applications such as medical diagnosis and recommendation systems. Specifically, we show that RAC achieves a substantially improved trade-off between safety and utility, offering higher utility compared to existing methods while maintaining the safety guarantee.
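As a quick illustration of the risk measure the abstract refers to, the following is a minimal sketch of Value-at-Risk (VaR) over a downstream loss (negative utility). At risk level alpha, VaR is the (1 − alpha)-quantile of the loss: with probability at least 1 − alpha, the realized loss stays at or below it. The loss values here are synthetic, purely for illustration, and do not come from the paper.

```python
import numpy as np

def value_at_risk(losses, alpha):
    """alpha-level VaR: the (1 - alpha)-quantile of the loss distribution."""
    return float(np.quantile(losses, 1 - alpha))

# Toy loss distribution: losses 0, 1, ..., 100, equally likely.
losses = np.arange(101)

# With alpha = 0.05, 95% of losses fall at or below this value.
print(value_at_risk(losses, alpha=0.05))  # → 95.0
```

A risk-averse agent minimizing VaR cares only about this tail quantile of its loss, which is why set-valued predictions (rather than, say, full predictive distributions) turn out to be the right uncertainty interface in the paper's analysis.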
Problem

Research questions and friction points this paper is trying to address.

Quantify uncertainty for risk-averse decision-making.
Develop optimal policies for risk-sensitive domains.
Enhance safety and utility in medical diagnosis.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Risk-Averse Calibration algorithm
Optimal prediction sets
Max-min decision policy
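The max-min decision policy listed above can be sketched in a few lines: given a conformal prediction set of plausible labels, the agent chooses the action whose worst-case utility over that set is largest. The utility matrix, nonconformity scores, and threshold below are invented for illustration; they are not values from the paper.

```python
import numpy as np

def conformal_prediction_set(scores, threshold):
    """Labels whose nonconformity score falls below a calibrated threshold."""
    return [y for y, s in enumerate(scores) if s <= threshold]

def max_min_action(utility, pred_set):
    """Max-min policy: action with the best worst-case utility over the set."""
    worst_case = utility[:, pred_set].min(axis=1)  # worst utility per action
    return int(worst_case.argmax())

# Toy example: 3 actions x 4 candidate labels (e.g. diagnoses).
utility = np.array([
    [ 1.0,  1.0, -5.0,  1.0],   # aggressive treatment: bad if label 2 is true
    [ 0.5,  0.5,  0.5, -1.0],   # conservative treatment
    [ 0.2,  0.2,  0.2,  0.2],   # safe fallback (e.g. defer to a specialist)
])
scores = np.array([0.1, 0.3, 0.9, 0.8])  # nonconformity score per label

pred_set = conformal_prediction_set(scores, threshold=0.5)  # → [0, 1]
print(max_min_action(utility, pred_set))                    # → 0
```

Note how the prediction set drives the decision: because labels 2 and 3 are excluded, the aggressive action's worst case over the set is 1.0 and it is chosen; with a larger set (higher threshold) the policy would fall back to a safer action.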