The Effect of Enforcing Fairness on Reshaping Explanations in Machine Learning Models

📅 2025-12-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how fairness optimization in healthcare machine learning affects model interpretability—particularly Shapley value-based feature ranking—and clinical trust. We systematically evaluate how fairness constraints perturb Shapley feature importance rankings across three real-world datasets (two clinical and one recidivism risk), employing multiple bias mitigation techniques. Our key findings are: (1) improving fairness significantly alters feature rankings across racial subgroups, with heterogeneous patterns across groups; and (2) optimizing fairness in isolation can degrade explanation stability, thereby undermining clinical credibility. To address this, we propose a triadic evaluation framework that jointly optimizes accuracy, fairness, and interpretability—emphasizing their interdependence rather than their independent pursuit. These results provide both theoretical foundations and practical guidelines for developing trustworthy, fair, and interpretable AI systems in clinical settings.

📝 Abstract
Trustworthy machine learning in healthcare requires strong predictive performance, fairness, and explanations. While it is known that improving fairness can affect predictive performance, little is known about how fairness improvements influence explainability, an essential ingredient for clinical trust. Clinicians may hesitate to rely on a model whose explanations shift after fairness constraints are applied. In this study, we examine how enhancing fairness through bias mitigation techniques reshapes Shapley-based feature rankings. We quantify changes in feature importance rankings after applying fairness constraints across three datasets: pediatric urinary tract infection risk, direct anticoagulant bleeding risk, and recidivism risk. We also evaluate multiple model classes on the stability of Shapley-based rankings. We find that increasing model fairness across racial subgroups can significantly alter feature importance rankings, sometimes in different ways across groups. These results highlight the need to jointly consider accuracy, fairness, and explainability in model assessment rather than in isolation.
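As a toy illustration of the kind of evaluation the abstract describes, the sketch below computes exact Shapley values for two small hypothetical linear models — a baseline and a "fairness-adjusted" variant with shifted weights — and compares the resulting feature rankings with a Kendall-tau rank-stability score. The models, features, and weights here are invented for illustration; the paper's actual datasets, bias mitigation techniques, and tooling are not reproduced.

```python
import itertools
import math

def shapley_values(model, x, baseline):
    """Exact Shapley values for a small feature set: average the
    marginal contribution of each feature over all coalitions,
    replacing absent features with the baseline value."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                with_i  = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (model(with_i) - model(without))
    return phi

def ranking(phi):
    """Feature indices ordered by descending absolute importance."""
    return sorted(range(len(phi)), key=lambda i: -abs(phi[i]))

def kendall_tau(r1, r2):
    """Rank-stability score in [-1, 1]: 1 means identical orderings,
    -1 means fully reversed orderings."""
    pos1 = {f: i for i, f in enumerate(r1)}
    pos2 = {f: i for i, f in enumerate(r2)}
    concordant = discordant = 0
    for a, b in itertools.combinations(r1, 2):
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) > 0:
            concordant += 1
        else:
            discordant += 1
    return (concordant - discordant) / (concordant + discordant)

# Hypothetical linear "models": a baseline and a fairness-adjusted
# variant whose weights have shifted after bias mitigation.
baseline_model = lambda x: 3 * x[0] + 1 * x[1] + 2 * x[2]
fair_model     = lambda x: 1 * x[0] + 3 * x[1] + 2 * x[2]

x, ref = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]
r_before = ranking(shapley_values(baseline_model, x, ref))
r_after  = ranking(shapley_values(fair_model, x, ref))
print(r_before, r_after, kendall_tau(r_before, r_after))
```

For an additive model with a zero baseline, each feature's Shapley value reduces to its weight times its input, so the shifted weights flip the top-ranked feature and the tau score drops, mirroring the ranking instability the paper quantifies on real data.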
Problem

Research questions and friction points this paper is trying to address.

Examining how fairness constraints reshape feature importance rankings in models
Quantifying changes in Shapley-based explanations across clinical and recidivism datasets
Highlighting the need to jointly assess accuracy, fairness, and explainability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Applying fairness constraints reshapes Shapley-based feature importance rankings
Evaluating fairness impact on explainability across multiple clinical datasets
Quantifying changes in feature rankings after bias mitigation techniques