🤖 AI Summary
This study addresses a critical gap in understanding how label bias and selection bias systematically affect the evaluation and performance of classification models, and the efficacy of fairness mitigation methods, often leading to unrepresentative assessments. To this end, we propose the first evaluation framework that enables the controlled introduction of distinct bias types. By constructing a “fair world” from a real low-discrimination dataset and deriving biased variants from it, our approach disentangles the individual effects of label and selection bias. Evaluating models and fairness interventions on an unbiased test set reveals that the type of bias present significantly modulates the effectiveness of mitigation strategies. Notably, under these unbiased evaluation conditions, we find no inherent trade-off between fairness and accuracy, or between individual and group fairness.
📝 Abstract
Bias can be introduced into machine learning datasets in diverse ways, for example via selection or label bias. Although these bias types themselves influence important aspects of fair machine learning, their distinct impacts have been understudied. In this work, we empirically analyze the effect of label bias and several subtypes of selection bias on the evaluation of classification models, on their performance, and on the effectiveness of bias mitigation methods. We also introduce a biasing and evaluation framework that makes it possible to model fair worlds and their biased counterparts by introducing controlled bias into real-life datasets with low discrimination. Using our framework, we analyze the impact of each bias type independently, while obtaining a more representative evaluation of models and mitigation methods than the traditional practice of using a subset of the biased data as the test set. Our results highlight several factors that influence how strongly bias affects model performance. They also show the absence of a trade-off between fairness and accuracy, and between individual and group fairness, when models are evaluated on a test set that does not exhibit unwanted bias. They furthermore indicate that the performance of bias mitigation methods depends on the type of bias present in the data. Our findings call for future work on more accurate evaluations of prediction models and fairness interventions, and on better understanding other types of bias, more complex scenarios combining different bias types, and other factors that affect the effectiveness of mitigation methods, such as dataset characteristics.
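The abstract describes the framework only at a high level. As a rough, hypothetical illustration of what the controlled introduction of each bias type could look like in practice, the sketch below flips positive labels within a protected group (label bias) and under-samples positive examples from that group (one possible subtype of selection bias), while holding out an unbiased test split. All names and parameters here (`group`, `label`, `flip_rate`, `drop_rate`, the synthetic data) are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np
import pandas as pd

def inject_label_bias(df, group_col, label_col, protected_value, flip_rate, seed=0):
    """Flip a fraction of positive labels to negative within the protected
    group, simulating a prejudiced labeling process (hypothetical sketch)."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    pos = out.index[(out[group_col] == protected_value) & (out[label_col] == 1)]
    flipped = rng.choice(pos, size=int(flip_rate * len(pos)), replace=False)
    out.loc[flipped, label_col] = 0
    return out

def inject_selection_bias(df, group_col, label_col, protected_value, drop_rate, seed=0):
    """Drop a fraction of positive examples from the protected group,
    simulating biased data collection (one possible subtype)."""
    rng = np.random.default_rng(seed)
    pos = df.index[(df[group_col] == protected_value) & (df[label_col] == 1)]
    dropped = rng.choice(pos, size=int(drop_rate * len(pos)), replace=False)
    return df.drop(index=dropped)

# Hypothetical "fair world": outcomes independent of group membership.
rng = np.random.default_rng(0)
n = 10_000
fair = pd.DataFrame({"group": rng.integers(0, 2, n), "label": rng.integers(0, 2, n)})

# Hold out an unbiased test set; bias only the training portion.
test = fair.sample(frac=0.3, random_state=0)
train = fair.drop(index=test.index)
train_lb = inject_label_bias(train, "group", "label", protected_value=1, flip_rate=0.3)
train_sb = inject_selection_bias(train, "group", "label", protected_value=1, drop_rate=0.3)

for name, d in [("fair", train), ("label bias", train_lb), ("selection bias", train_sb)]:
    rates = d.groupby("group")["label"].mean()
    print(f"{name}: positive rate per group = {rates.round(3).to_dict()}")
```

The design point mirrored here is that bias is applied only to the training data, so models and mitigation methods can still be scored against the untouched “fair world” test split rather than against a biased subset.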