🤖 AI Summary
This study investigates how fairness optimization in healthcare machine learning affects model interpretability—particularly Shapley value-based feature ranking—and clinical trust. We systematically evaluate the effect of fairness constraints on Shapley feature importance rankings across three real-world clinical datasets, employing multiple bias mitigation techniques. Our key findings are: (1) improving fairness significantly alters feature rankings across racial subgroups, with patterns that differ between groups; and (2) optimizing fairness in isolation can degrade explanation stability, thereby undermining clinical credibility. To address this, we propose a triadic evaluation framework that jointly optimizes accuracy, fairness, and interpretability—emphasizing their interdependence rather than pursuing each independently. These results provide both theoretical foundations and practical guidelines for developing trustworthy, fair, and interpretable AI systems in clinical settings.
📝 Abstract
Trustworthy machine learning in healthcare requires strong predictive performance, fairness, and explainability. While it is known that improving fairness can affect predictive performance, little is known about how fairness improvements influence explainability, an essential ingredient for clinical trust. Clinicians may hesitate to rely on a model whose explanations shift after fairness constraints are applied. In this study, we examine how enhancing fairness through bias mitigation techniques reshapes Shapley-based feature rankings. We quantify changes in feature importance rankings after applying fairness constraints across three datasets: pediatric urinary tract infection risk, direct anticoagulant bleeding risk, and recidivism risk. We also evaluate the stability of Shapley-based rankings across multiple model classes. We find that increasing model fairness across racial subgroups can significantly alter feature importance rankings, sometimes in different ways across groups. These results highlight the need to jointly consider accuracy, fairness, and explainability in model assessment rather than evaluating each in isolation.
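One way the kind of ranking-stability comparison described above can be quantified is with a rank correlation (e.g., Kendall's tau) between the Shapley feature ordering of a baseline model and that of its fairness-constrained counterpart. The sketch below is illustrative only — the feature names and orderings are hypothetical, not taken from the study's datasets:

```python
# Minimal sketch: measuring how much a Shapley-based feature ranking
# shifts after fairness constraints, via Kendall's tau rank correlation.
# Feature names and orderings here are hypothetical illustrations.

def kendall_tau(rank_a, rank_b):
    """Kendall's tau between two rankings given as lists of the same
    items ordered from most to least important (no ties)."""
    pos_b = {item: i for i, item in enumerate(rank_b)}
    n = len(rank_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # The pair (i, j) is ordered in rank_a; check its order in rank_b.
            if pos_b[rank_a[i]] < pos_b[rank_a[j]]:
                concordant += 1
            else:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical Shapley importance orderings before and after applying
# a bias mitigation technique.
baseline_ranking = ["age", "fever", "prior_uti", "sex", "catheter"]
fair_ranking = ["fever", "age", "sex", "prior_uti", "catheter"]

tau = kendall_tau(baseline_ranking, fair_ranking)
print(f"Kendall's tau between rankings: {tau:.2f}")  # → 0.60
```

A tau near 1.0 indicates the explanation ranking is stable under the fairness intervention; lower values flag exactly the kind of ranking shift the study reports. In practice this comparison would be run per racial subgroup, since the findings indicate the shifts can differ between groups.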