Evolved SampleWeights for Bias Mitigation: Effectiveness Depends on Optimization Objectives

📅 2025-11-25
🤖 AI Summary
This paper addresses biased predictions by machine learning models against marginalized groups in real-world data. To jointly optimize predictive accuracy and fairness, the authors propose a genetic-algorithm-based sample-weighting method that evolves instance-level weights through multi-objective optimization. Unlike conventional uniform or feature-driven weighting schemes, this approach simultaneously optimizes accuracy, AUC, demographic parity difference, and subgroup false negative rate. Experiments on 11 publicly available datasets, including two healthcare benchmarks, demonstrate that the evolved weights substantially improve the fairness–performance trade-off. The largest gains arise when jointly optimizing accuracy and demographic parity difference, supporting the method's effectiveness and generalizability in practical, high-stakes domains.

📝 Abstract
Machine learning models trained on real-world data may inadvertently make biased predictions that negatively impact marginalized communities. Reweighting is a method that can mitigate such bias in model predictions by assigning a weight to each data point used during model training. In this paper, we compare three methods for generating these weights: (1) evolving them using a Genetic Algorithm (GA), (2) computing them using only dataset characteristics, and (3) assigning equal weights to all data points. Model performance under each strategy was evaluated using paired predictive and fairness metrics, which also served as optimization objectives for the GA during evolution. Specifically, we used two predictive metrics (accuracy and area under the Receiver Operating Characteristic curve) and two fairness metrics (demographic parity difference and subgroup false negative fairness). Using experiments on eleven publicly available datasets (including two medical datasets), we show that evolved sample weights can produce models that achieve better trade-offs between fairness and predictive performance than alternative weighting methods. However, the magnitude of these benefits depends strongly on the choice of optimization objectives. Our experiments reveal that optimizing with accuracy and demographic parity difference metrics yields the largest number of datasets for which evolved weights are significantly better than other weighting strategies in optimizing both objectives.
Problem

Research questions and friction points this paper is trying to address.

Mitigating biased predictions in ML models affecting marginalized communities
Comparing evolved vs computed sample weights for bias reduction
Evaluating fairness-performance tradeoffs across eleven public datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Genetic Algorithm evolves sample weights for bias mitigation
Optimization objectives balance accuracy and fairness metrics
Evolved weights outperform alternative methods on multiple datasets
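The core idea, evolving per-sample training weights against paired accuracy and fairness objectives, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's method: the weighted logistic-regression learner, the truncation-selection loop, and the scalarized fitness (accuracy minus demographic parity difference) are simplifying assumptions standing in for the paper's actual models and multi-objective GA.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two subgroups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def train_weighted_logreg(X, y, w, lr=0.1, epochs=200):
    """Toy weighted logistic regression fit by gradient descent."""
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        theta -= lr * (X.T @ (w * (p - y)) / w.sum())
    return theta

def evolve_weights(X, y, group, pop_size=20, gens=30, seed=0):
    """Evolve per-sample weights; fitness scalarizes accuracy and fairness."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.5, 1.5, size=(pop_size, len(y)))

    def fitness(w):
        theta = train_weighted_logreg(X, y, w)
        y_hat = (X @ theta > 0).astype(int)
        acc = (y_hat == y).mean()
        dpd = demographic_parity_difference(y_hat, group)
        return acc - dpd  # higher accuracy, lower disparity is better

    for _ in range(gens):
        scores = np.array([fitness(w) for w in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]       # keep top half
        children = parents + rng.normal(0, 0.05, size=parents.shape)
        pop = np.clip(np.vstack([parents, children]), 0.01, None)  # weights stay positive
    return pop[np.argmax([fitness(w) for w in pop])]

# Toy usage: synthetic features with a random binary sensitive attribute.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
y = (X[:, 0] > 0).astype(int)
group = (rng.random(60) > 0.5).astype(int)
weights = evolve_weights(X, y, group, pop_size=8, gens=5)
```

A real implementation would keep accuracy and demographic parity difference as separate objectives (e.g., NSGA-II) rather than collapsing them into one score, and would evaluate fitness on held-out data to avoid overfitting the weights.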
Anil K. Saini
Cedars-Sinai Medical Center, Los Angeles, USA
Jose Guadalupe Hernandez
Cedars-Sinai Medical Center, Los Angeles, USA
Emily F. Wong
Cedars-Sinai Medical Center, Los Angeles, USA
Debanshi Misra
University of California, Los Angeles, USA
Jason H. Moore
Chair, Department of Computational Biomedicine, Cedars-Sinai Medical Center, Los Angeles, CA
Artificial Intelligence, Machine Learning, Biomedical Informatics, Precision Medicine, Translational Bioinformatics