🤖 AI Summary
Kidney paired donation (KPD) systems exhibit disparities in transplant opportunities across protected attributes (e.g., race, gender), undermining equitable access.
Method: We propose a novel fairness criterion conditioned on recipient sensitization level—requiring match outcomes to be conditionally independent of sensitive attributes given sensitization—and integrate calibration-inspired fairness constraints into integer programming formulations for KPD optimization.
Contribution/Results: This is the first work to formalize and enforce calibration-based fairness in organ exchange. We theoretically characterize the fairness–efficiency trade-off, addressing a gap left by conventional group- and individual-level fairness notions, which do not account for protected patient features. Using random graph modeling and empirical validation on real-world KPD data, our approach maintains high exchange rates and computational efficiency while significantly improving fairness for protected groups; the associated price of fairness remains bounded and controllable.
📝 Abstract
The kidney paired donation (KPD) program provides an innovative solution to overcome incompatibility challenges in kidney transplants by matching incompatible donor-patient pairs and facilitating kidney exchanges. Two fairness criteria are widely used to address unequal access to transplant opportunities: group fairness and individual fairness. However, neither criterion considers protected patient features, i.e., characteristics legally or ethically recognized as needing protection from discrimination, such as race and gender. Motivated by the calibration principle in machine learning, we introduce a new fairness criterion: the matching outcome should be conditionally independent of the protected feature, given the sensitization level. We integrate this fairness criterion as a constraint within the KPD optimization framework and propose a computationally efficient solution. Theoretically, we analyze the associated price of fairness using random graph models. Empirically, we compare our fairness criterion with group fairness and individual fairness through both simulations and a real-data example.
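To make the criterion concrete, here is a minimal brute-force sketch on a toy instance. The data, the tolerance parameter `eps`, and the restriction to two-way exchanges are all illustrative assumptions, not the paper's formulation (which uses integer programming); the sketch only shows the calibration-style check: within each sensitization level, match rates across protected groups may differ by at most `eps`.

```python
from itertools import combinations

# Toy KPD instance (illustrative data, not from the paper).
# Each incompatible donor-patient pair: (protected attribute, sensitization level).
pairs = {
    0: ("A", "high"), 1: ("A", "high"), 2: ("B", "high"),
    3: ("A", "low"),  4: ("B", "low"),  5: ("B", "low"),
}
# Undirected edges = feasible two-way exchanges between pairs (toy data).
edges = [(0, 1), (1, 2), (4, 5), (3, 4)]

def rate(matched, group, level):
    """Fraction of pairs in (group, level) that receive a transplant."""
    members = [p for p, gl in pairs.items() if gl == (group, level)]
    return sum(p in matched for p in members) / len(members) if members else None

def is_fair(matched, eps):
    # Calibration-style criterion: conditional on sensitization level,
    # match rates across protected groups may differ by at most eps.
    groups = {g for g, _ in pairs.values()}
    for level in {s for _, s in pairs.values()}:
        rates = [r for g in groups if (r := rate(matched, g, level)) is not None]
        if rates and max(rates) - min(rates) > eps:
            return False
    return True

def max_fair_matching(eps):
    """Brute force: largest vertex-disjoint edge set meeting the fairness bound."""
    best = set()
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            used = [p for e in subset for p in e]
            if len(used) == len(set(used)):  # vertex-disjoint => a matching
                m = set(used)
                if is_fair(m, eps) and len(m) > len(best):
                    best = m
    return best

print(sorted(max_fair_matching(eps=2.0)))  # unconstrained: [0, 1, 4, 5]
print(sorted(max_fair_matching(eps=0.5)))  # fairness-constrained: [1, 2, 3, 4]
```

In this toy instance the fairness-constrained solution matches the same number of pairs as the unconstrained one but redistributes transplants across the protected groups within each sensitization level, illustrating the abstract's claim that the price of fairness can remain bounded.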