🤖 AI Summary
This study addresses model unfairness toward specific subpopulations in high-stakes clinical settings, a problem that often stems from biases in the training data. Focusing on intensive care unit (ICU) prediction tasks, the authors systematically evaluate how incorporating external electronic health records (from the eICU Collaborative Research Database and MIMIC-IV) affects performance across patient subgroups. Through multi-source data integration, subgroup performance analysis, post-hoc calibration, and comparisons of data selection strategies, they show that simply increasing training data volume does not necessarily improve fairness and can even degrade subgroup performance, challenging the prevailing "more data is better" assumption. Rather than proposing a single new method, the work combines data- and model-based interventions, showing that pairing targeted data addition with post-hoc calibration improves both subgroup fairness and overall predictive performance.
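The subgroup performance analysis described above rests on a simple idea: a pooled metric can look healthy while one subgroup quietly degrades, so metrics must be reported per subgroup. A minimal sketch of that evaluation, using hypothetical toy labels and groups (not data from the paper):

```python
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Compute accuracy separately for each subgroup.

    A pooled metric can mask degradation in a small subgroup,
    which is why per-subgroup reporting matters.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical toy example: overall accuracy is 5/8, which looks
# passable, but subgroup "B" performs far worse than subgroup "A".
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(subgroup_accuracy(y_true, y_pred, groups))  # → {'A': 1.0, 'B': 0.25}
```

The same pattern applies to any metric (AUROC, calibration error): compute it within each subgroup, then compare before and after adding an external data source.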
📝 Abstract
In high-stakes settings where machine learning models are used to automate decision-making about individuals, algorithmic bias can exacerbate systemic harm to certain subgroups of people. These biases often stem from the underlying training data. In practice, interventions to "fix the data" depend on which additional data sources are actually available -- and many are less than ideal. In these cases, the effects of data scaling on subgroup performance become volatile, as the improvements from increased sample size are counteracted by distribution shifts introduced into the training set. In this paper, we investigate the limitations of combining data sources to improve subgroup performance in healthcare. Clinical models are commonly trained on datasets composed of patient electronic health record (EHR) data from different hospitals or admission departments. Across two such datasets, the eICU Collaborative Research Database and MIMIC-IV, we find that adding data can both help and hurt model fairness and performance, and that many intuitive data selection strategies are unreliable. Comparing model-based post-hoc calibration with data-centric addition strategies, we find that combining the two is important for improving subgroup performance. Our work questions the traditional dogma of "better data" for overcoming fairness challenges by comparing and combining data- and model-based approaches.
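The model-based intervention the abstract pairs with data addition is post-hoc calibration applied per subgroup. As a minimal stand-in (the paper's actual calibration method may differ), one can fit a single additive offset in logit space for each subgroup so that the group's mean predicted probability matches its observed event rate; the `fit_group_offsets` helper below is a hypothetical name, not from the paper:

```python
import math
from collections import defaultdict

def _logit(p):
    # Clamp to avoid infinities at 0 or 1.
    p = min(max(p, 1e-6), 1 - 1e-6)
    return math.log(p / (1 - p))

def fit_group_offsets(probs, y_true, groups, steps=200, lr=0.5):
    """Fit one additive logit offset per subgroup so each group's mean
    predicted probability matches its observed event rate.

    A simple intercept-correction sketch of per-group post-hoc
    calibration, fit by fixed-point iteration."""
    by_group = defaultdict(list)
    for p, y, g in zip(probs, y_true, groups):
        by_group[g].append((_logit(p), y))
    offsets = {}
    for g, pairs in by_group.items():
        target = sum(y for _, y in pairs) / len(pairs)
        b = 0.0
        for _ in range(steps):
            mean_p = sum(1 / (1 + math.exp(-(z + b))) for z, _ in pairs) / len(pairs)
            b -= lr * (mean_p - target)  # shrink the gap to the target rate
        offsets[g] = b
    return offsets

def apply_offsets(probs, groups, offsets):
    """Shift each prediction by its group's offset in logit space."""
    return [1 / (1 + math.exp(-(_logit(p) + offsets[g])))
            for p, g in zip(probs, groups)]

# Hypothetical group "B" is overconfident: mean prediction 0.85,
# observed rate 0.25. The fitted offset pulls predictions down.
probs = [0.9, 0.8, 0.9, 0.8]
y_true = [1, 0, 0, 0]
groups = ["B"] * 4
offsets = fit_group_offsets(probs, y_true, groups)
calibrated = apply_offsets(probs, groups, offsets)
```

Because each subgroup gets its own correction, a group whose predictions drifted after external data was added can be recalibrated without disturbing the others; the paper's finding is that such model-side fixes work best when combined with careful data-side selection.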