Counterfactually Fair Conformal Prediction

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the lack of fairness guarantees in conformal prediction by proposing CF-CP, the first framework for constructing counterfactually fair prediction sets. The core challenge lies in ensuring marginal coverage—i.e., guaranteeing that the prediction set contains the true label with probability at least $1-\alpha$—while eliminating the influence of protected attributes (e.g., gender, race) on uncertainty quantification. CF-CP achieves this by defining and enforcing distributional symmetry of conformity scores under counterfactual interventions, without modifying or retraining the underlying predictive model. It thus simultaneously attains statistical validity and counterfactual fairness. Experiments on synthetic and multiple real-world datasets demonstrate CF-CP's effectiveness for both classification and regression tasks: it reliably achieves the target coverage level while incurring only negligible inflation in prediction set size.

📝 Abstract
While counterfactual fairness of point predictors is well studied, its extension to prediction sets--central to fair decision-making under uncertainty--remains underexplored. On the other hand, conformal prediction (CP) provides efficient, distribution-free, finite-sample valid prediction sets, yet does not ensure counterfactual fairness. We close this gap by developing Counterfactually Fair Conformal Prediction (CF-CP) that produces counterfactually fair prediction sets. Through symmetrization of conformity scores across protected-attribute interventions, we prove that CF-CP results in counterfactually fair prediction sets while maintaining the marginal coverage property. Furthermore, we empirically demonstrate that on both synthetic and real datasets, across regression and classification tasks, CF-CP achieves the desired counterfactual fairness and meets the target coverage rate with minimal increase in prediction set size. CF-CP offers a simple, training-free route to counterfactually fair uncertainty quantification.
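The symmetrization idea described above can be sketched in code. The following is a minimal illustration under stated assumptions, not the paper's actual implementation: it assumes a binary protected attribute stored as the last feature column, a trivial "counterfactual generator" that simply flips that column (a real counterfactual would come from a causal model), and an absolute-residual conformity score for split conformal regression. Symmetrizing the score means evaluating it under every attribute intervention and taking the maximum, so the resulting interval cannot depend on the factual attribute value.

```python
import numpy as np

def counterfactual_inputs(x, a):
    # Hypothetical counterfactual generator (an assumption for this sketch):
    # set the protected attribute, assumed to be the last column, to value a.
    x_cf = x.copy()
    x_cf[:, -1] = a
    return x_cf

def symmetrized_scores(model, X, y, attr_values):
    # Absolute-residual conformity score |y - f(x)| evaluated under every
    # protected-attribute intervention; the pointwise max symmetrizes the
    # score across interventions.
    scores = [np.abs(y - model(counterfactual_inputs(X, a))) for a in attr_values]
    return np.max(scores, axis=0)

def cf_cp_interval(model, X_cal, y_cal, x_test, alpha=0.1, attr_values=(0, 1)):
    # Split conformal: finite-sample-valid quantile of symmetrized
    # calibration scores.
    s = symmetrized_scores(model, X_cal, y_cal, attr_values)
    n = len(s)
    q = np.quantile(s, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n),
                    method="higher")
    # Cover the prediction under every intervention, so the interval is
    # identical for factual and counterfactual versions of x_test.
    preds = [model(counterfactual_inputs(x_test, a)) for a in attr_values]
    return np.min(preds, axis=0) - q, np.max(preds, axis=0) + q
```

Because the interval is built from all interventions of the test input, flipping the protected attribute of `x_test` leaves the output unchanged, which is the counterfactual-fairness property the paper proves for its construction.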
Problem

Research questions and friction points this paper is trying to address.

Extends counterfactual fairness from point predictors to prediction sets
Ensures prediction sets maintain fairness under protected-attribute interventions
Achieves counterfactual fairness while preserving marginal coverage guarantees
Innovation

Methods, ideas, or system contributions that make the work stand out.

Symmetrizing conformity scores across protected attributes
Maintaining marginal coverage in fair prediction sets
Providing training-free fair uncertainty quantification
👤 Authors
Ozgur Guldogan (UC Santa Barbara)
Neeraj Sarna (Munich RE; data-driven methods, model-order reduction, scientific computing)
Yuanyuan Li (Munich RE)
Michael Berger (Munich RE)