🤖 AI Summary
This work proposes FairMed-XGB, a novel framework that addresses demographic bias, particularly gender disparity, in clinical machine learning models, where such bias can undermine diagnostic fairness and clinician trust. FairMed-XGB integrates statistical parity difference, the Theil index, and Wasserstein distance directly into the XGBoost loss function, jointly optimizing predictive accuracy and gender fairness through Bayesian optimization, and employs SHAP values to provide model interpretability. Evaluated on the MIMIC-IV-ED and eICU datasets, the method substantially reduces gender bias, achieving 10–51% lower statistical parity difference, Theil indices approaching zero, and 20–72% reductions in Wasserstein distance, while maintaining high predictive performance (AUC-ROC decreases by less than 0.02). This demonstrates that strong fairness guarantees can be achieved without sacrificing clinical-grade accuracy.
📝 Abstract
Machine learning models deployed in critical care settings exhibit demographic biases, particularly gender disparities, that undermine clinical trust and equitable treatment. This paper introduces FairMed-XGB, a novel framework that systematically detects and mitigates gender-based prediction bias while preserving model performance and transparency. The framework embeds a fairness-aware loss function, combining Statistical Parity Difference, Theil Index, and Wasserstein Distance, into an XGBoost classifier and jointly optimises it via Bayesian search. Post-mitigation evaluation on seven clinically distinct cohorts derived from the MIMIC-IV-ED and eICU databases demonstrates substantial bias reduction: Statistical Parity Difference decreases by 40 to 51 percent on MIMIC-IV-ED and by 10 to 19 percent on eICU; the Theil Index collapses by four to five orders of magnitude to near-zero values; and Wasserstein Distance is reduced by 20 to 72 percent. These gains are achieved with negligible degradation in predictive accuracy (AUC-ROC drop <0.02). SHAP-based explainability reveals that the framework diminishes reliance on gender-proxy features, providing clinicians with actionable insights into how and where bias is corrected. FairMed-XGB offers a robust, interpretable, and ethically aligned solution for equitable clinical decision-making, paving the way for trustworthy deployment of AI in high-stakes healthcare environments.
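To make the three fairness terms concrete, here is a minimal numpy-only sketch of how Statistical Parity Difference, the Theil Index, and the 1-D Wasserstein Distance can be computed for a binary sensitive attribute, plus an illustrative scalarised score a Bayesian search could maximise. This is not the paper's implementation: the benefit definition `b = y_pred - y_true + 1` inside the Theil index is one common choice from the fairness literature, and the penalty weights `lam` are hypothetical; the actual framework folds these terms into the XGBoost loss itself.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def theil_index(y_true, y_pred):
    """Theil index over per-patient 'benefits' b_i = y_pred_i - y_true_i + 1
    (a common convention; the paper's exact benefit definition may differ)."""
    b = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float) + 1.0
    r = b / b.mean()
    # Convention: 0 * log(0) = 0, so zero-benefit patients contribute nothing.
    return float(np.where(r > 0, r * np.log(np.where(r > 0, r, 1.0)), 0.0).mean())

def wasserstein_1d(u, v):
    """Earth mover's distance between two 1-D score samples,
    computed as the area between their empirical CDFs."""
    u, v = np.sort(u), np.sort(v)
    grid = np.sort(np.concatenate([u, v]))
    deltas = np.diff(grid)
    cdf_u = np.searchsorted(u, grid[:-1], side="right") / len(u)
    cdf_v = np.searchsorted(v, grid[:-1], side="right") / len(v)
    return float(np.sum(np.abs(cdf_u - cdf_v) * deltas))

def fairness_penalised_score(auc, spd, theil, wd, lam=(1.0, 1.0, 1.0)):
    """Illustrative scalarised objective for a hyperparameter search:
    accuracy minus weighted fairness penalties (weights are hypothetical)."""
    return auc - (lam[0] * spd + lam[1] * theil + lam[2] * wd)

# Toy example: 8 patients, binary sensitive attribute (e.g. gender).
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_hat  = np.array([1, 1, 1, 0, 1, 0, 0, 0])          # thresholded predictions
scores = np.array([.9, .8, .7, .2, .6, .3, .2, .1])  # predicted risk scores

spd = statistical_parity_difference(y_hat, group)    # 0.5
ti  = theil_index(y_true, y_hat)
wd  = wasserstein_1d(scores[group == 0], scores[group == 1])  # ~0.35
```

A search procedure would then pick the hyperparameters maximising `fairness_penalised_score(auc, spd, ti, wd)`, trading a small amount of AUC-ROC for large reductions in the three disparity terms, which mirrors the trade-off the abstract reports.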