🤖 AI Summary
This work addresses a limitation of classical conformal prediction: it controls miscoverage in classification and regression but cannot rigorously control other semantic risk metrics, such as the false negative rate, graph distance, or token-level F1-score. The paper extends conformal prediction to provably control the expected value of any monotone loss function. The method generalizes split conformal prediction and its coverage guarantee, using a finite-sample risk-calibration step to deliver an upper bound of $\alpha$ on the expected risk that is tight up to an $\mathcal{O}(1/n)$ factor. Key contributions are: (1) a theoretically grounded framework for expected-risk control under general monotone losses; (2) natural extensions to challenging settings, including distribution shift, quantile risk control, multiple and adversarial risk control, and expectations of U-statistics; and (3) empirical validation across CV and NLP tasks, demonstrating effective control of semantic risk metrics.
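For concreteness, the guarantee being summarized can be written out as follows; the notation here is assumed rather than taken from the summary itself: $C_\lambda$ is a set-valued predictor indexed by a threshold $\lambda$, $\ell$ is a loss that is nonincreasing in $\lambda$ and bounded above by $B$, and $(X_1, Y_1), \ldots, (X_n, Y_n)$ are the calibration points. A sketch of the calibration rule under these assumptions:

$$
\hat{\lambda} \;=\; \inf\left\{\lambda \,:\, \frac{n}{n+1}\,\hat{R}_n(\lambda) + \frac{B}{n+1} \le \alpha\right\},
\qquad
\hat{R}_n(\lambda) \;=\; \frac{1}{n}\sum_{i=1}^{n} \ell\big(C_\lambda(X_i), Y_i\big),
$$

which yields $\mathbb{E}\big[\ell(C_{\hat{\lambda}}(X_{n+1}), Y_{n+1})\big] \le \alpha$ for an exchangeable test point $(X_{n+1}, Y_{n+1})$.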
📝 Abstract
We extend conformal prediction to control the expected value of any monotone loss function. The algorithm generalizes split conformal prediction together with its coverage guarantee. Like conformal prediction, the conformal risk control procedure is tight up to an $\mathcal{O}(1/n)$ factor. We also introduce extensions of the idea to distribution shift, quantile risk control, multiple and adversarial risk control, and expectations of U-statistics. Worked examples from computer vision and natural language processing demonstrate the usage of our algorithm to bound the false negative rate, graph distance, and token-level F1-score.
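A minimal NumPy sketch of the calibration step the abstract describes, under the same assumptions as above (loss nonincreasing in $\lambda$ and bounded by $B$). The function name `conformal_risk_control`, the precomputed `losses` matrix, and the grid of candidate `lambdas` are illustrative choices, not an implementation taken from the paper:

```python
import numpy as np

def conformal_risk_control(losses, lambdas, alpha, B=1.0):
    """Pick the smallest lambda whose corrected empirical risk is <= alpha.

    losses  : (n, m) array with losses[i, j] = L_i(lambdas[j]); each row is
              assumed nonincreasing in j and bounded above by B.
    lambdas : (m,) ascending array of candidate thresholds.
    alpha   : target level for the expected loss on a fresh test point.
    B       : known upper bound on the loss.
    """
    n = losses.shape[0]
    risk = losses.mean(axis=0)                        # empirical risk R_hat(lambda_j)
    ok = (n / (n + 1)) * risk + B / (n + 1) <= alpha  # finite-sample correction
    if not ok.any():
        raise ValueError("alpha is unachievable for this calibration set")
    return lambdas[np.argmax(ok)]                     # first grid point meeting the bound
```

As a hypothetical usage: for false-negative-rate control in multilabel prediction, `losses[i, j]` would be the fraction of example $i$'s true labels missed by the set $\{k : \text{score}_k(x_i) \ge 1 - \lambda_j\}$, with $B = 1$.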