Conditional Conformal Risk Adaptation

📅 2025-04-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address imbalanced image-level conditional risks (e.g., false negative rate) and the difficulty of meeting personalized clinical decision-making requirements in medical image segmentation, this paper proposes an adaptive conformal risk control framework. We first establish a unified theoretical connection between weighted quantiles and conformal prediction. We then design two novel methods: CCRA, which adds probability calibration for more reliable pixel-wise inclusion estimates, and CCRA-S, which incorporates hierarchical image feature representations to set stratified risk thresholds. Evaluated on polyp segmentation, our approach strictly guarantees marginal risk constraints while reducing inter-image false negative rate variance by over 40%, significantly outperforming existing conformal methods. This work introduces a verifiable and controllable uncertainty quantification paradigm for high-stakes medical AI systems, enabling rigorous, clinically adaptable risk management.
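The stratification idea behind CCRA-S can be pictured as binning calibration images by a scalar feature and taking a conformal quantile of per-image scores within each bin. Below is a minimal sketch of that generic recipe, not the paper's actual procedure: the helper name `group_thresholds`, the quantile binning, and the use of a scalar feature (e.g., predicted lesion area) are all assumptions for illustration.

```python
import numpy as np

def group_thresholds(feature, scores, alpha=0.1, n_groups=3):
    """Hypothetical sketch of stratified conformal thresholds.

    Partition calibration images into quantile bins of a scalar
    feature, then take a conformal quantile of the per-image
    nonconformity score within each bin.

    feature: (n,) scalar image feature, e.g. predicted lesion area
    scores:  (n,) per-image nonconformity score
    """
    # Bin edges at feature quantiles, so groups have equal counts.
    edges = np.quantile(feature, np.linspace(0, 1, n_groups + 1))
    bins = np.clip(np.searchsorted(edges, feature, side="right") - 1,
                   0, n_groups - 1)
    thresholds = {}
    for g in range(n_groups):
        s = np.sort(scores[bins == g])
        # Standard conformal quantile index ceil((m+1)(1-alpha)).
        k = int(np.ceil((len(s) + 1) * (1 - alpha))) - 1
        thresholds[g] = s[min(k, len(s) - 1)]
    return edges, thresholds
```

At test time an image would be routed to its feature bin and thresholded with that group's value, which is how group-specific thresholds can tighten conditional (per-group) risk relative to a single marginal threshold.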

📝 Abstract
Uncertainty quantification is becoming increasingly important in image segmentation, especially for high-stakes applications like medical imaging. While conformal risk control generalizes conformal prediction beyond standard miscoverage to handle various loss functions such as false negative rate, its application to segmentation often yields inadequate conditional risk control: some images experience very high false negative rates while others have negligibly small ones. We develop Conformal Risk Adaptation (CRA), which introduces a new score function for creating adaptive prediction sets that significantly improve conditional risk control for segmentation tasks. We establish a novel theoretical framework that demonstrates a fundamental connection between conformal risk control and conformal prediction through a weighted quantile approach, applicable to any score function. To address the challenge of poorly calibrated probabilities in segmentation models, we introduce a specialized probability calibration framework that enhances the reliability of pixel-wise inclusion estimates. Using these calibrated probabilities, we propose Calibrated Conformal Risk Adaptation (CCRA) and a stratified variant (CCRA-S) that partitions images based on their characteristics and applies group-specific thresholds to further enhance conditional risk control. Our experiments on polyp segmentation demonstrate that all three methods (CRA, CCRA, and CCRA-S) provide valid marginal risk control and deliver more consistent conditional risk control across diverse images compared to standard approaches, offering a principled approach to uncertainty quantification that is particularly valuable for high-stakes and personalized segmentation applications.
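The abstract builds on conformal risk control: choose a pixel-probability threshold on a calibration set so that a finite-sample bound on the expected false negative rate holds. A minimal sketch of that standard CRC recipe follows; it uses a plain threshold grid rather than the paper's CRA score function, and the helper name `crc_threshold` is an assumption.

```python
import numpy as np

def crc_threshold(probs, masks, alpha=0.1):
    """Hypothetical sketch of conformal risk control for FNR.

    Scan thresholds from strict (1.0) to permissive (0.0) and return
    the largest one whose calibration-set false negative rate, with a
    finite-sample correction, stays below the target risk alpha.

    probs: list of (H, W) predicted foreground probability maps
    masks: list of (H, W) binary ground-truth masks
    """
    n = len(probs)
    for lam in np.linspace(1.0, 0.0, 101):
        # Per-image FNR: fraction of true-foreground pixels excluded
        # from the prediction set {p >= lam}.
        fnrs = []
        for p, m in zip(probs, masks):
            fg = m.astype(bool)
            if fg.sum() == 0:
                continue  # FNR undefined with no foreground
            fnrs.append(np.mean(p[fg] < lam))
        # CRC finite-sample bound: (n * mean_risk + 1) / (n + 1) <= alpha.
        if (n * np.mean(fnrs) + 1) / (n + 1) <= alpha:
            return lam
    return 0.0  # include every pixel if no stricter threshold works
```

Because the FNR shrinks monotonically as the threshold is lowered (prediction sets only grow), the first threshold satisfying the bound in this descending scan is the largest valid one.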
Problem

Research questions and friction points this paper is trying to address.

Improves conditional risk control in image segmentation
Addresses poorly calibrated probabilities in segmentation models
Enhances uncertainty quantification for high-stakes medical imaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive prediction sets improve risk control
Weighted quantile links risk control and prediction
Calibrated probabilities enhance pixel-wise reliability
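The "weighted quantile" bullet refers to computing a quantile of calibration scores under nonuniform weights, which the paper uses to connect conformal risk control to conformal prediction. The helper below is a hypothetical illustration of a weighted quantile, not the paper's exact construction.

```python
import numpy as np

def weighted_quantile(scores, weights, q):
    """Smallest score s such that the cumulative weight of scores
    <= s reaches fraction q of the total weight (illustrative)."""
    scores = np.asarray(scores, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cum = np.cumsum(w) / w.sum()
    idx = np.searchsorted(cum, q)  # first index where cum >= q
    return s[min(idx, len(s) - 1)]
```

With uniform weights this reduces to the ordinary empirical quantile used in split conformal prediction; nonuniform weights let the same machinery target risks other than miscoverage.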
Rui Luo
Department of Systems Engineering, City University of Hong Kong
Zhixin Zhou
Alpha Benito Research
Statistics · Machine Learning