🤖 AI Summary
Predictive models often inherit and amplify systemic biases present in training data, undermining decision fairness. To address this, we propose a controllable bias-injection framework for synthetic data generation, using proxy base models to emulate structural biases in loan approval and producing synthetic datasets with tunable bias intensity. Building on this, we systematically evaluate the efficacy of pre-processing, in-processing, and post-processing debiasing methods under both offline and online learning settings. We further introduce a novel second-order Shapley value-based interpretability method that quantitatively characterizes how debiasing strategies alter feature-dependency structures and model reliance mechanisms. Experiments demonstrate that the framework enables precise bias replication and calibration, while revealing the distinct internal decision-logic interventions induced by different debiasing techniques, thereby significantly improving the interpretability and controllability of fairness evaluation.
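A minimal sketch of what such controllable bias injection could look like, assuming a single tunable `bias` knob that penalizes one group's approval odds independently of creditworthiness. The distributions, variable names, and the 0.5 penalty scale are illustrative assumptions, not the paper's actual generator:

```python
import random

def generate_loan_data(n, bias, seed=0):
    """Generate a synthetic loan-application dataset for two groups.

    `bias` in [0, 1] tunes how strongly membership in group B (a proxy
    for a protected attribute) lowers approval odds, independently of
    creditworthiness. Purely illustrative parameters.
    """
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        group = rng.choice(["A", "B"])
        income = rng.gauss(50_000, 15_000)  # same distribution for both groups
        # Creditworthiness score in [0, 1] derived only from income.
        score = min(max((income - 20_000) / 60_000, 0.0), 1.0)
        # Injected structural bias: group B's approval odds are suppressed.
        penalty = bias * 0.5 if group == "B" else 0.0
        approved = rng.random() < max(score - penalty, 0.0)
        rows.append({"group": group, "income": income, "approved": approved})
    return rows

def approval_gap(rows):
    """Statistical-parity gap: approval rate of group A minus group B."""
    def rate(g):
        members = [r for r in rows if r["group"] == g]
        return sum(r["approved"] for r in members) / max(len(members), 1)
    return rate("A") - rate("B")
```

Sweeping `bias` from 0 upward yields datasets whose group approval-rate gap grows monotonically in expectation, which is the "tunable bias intensity" property the summary describes.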
📝 Abstract
Predictive models often reinforce, through skewed decisions, biases that were originally embedded in their training data. In such cases, mitigation methods are critical to ensure that, regardless of the prevailing disparities, model outcomes are adjusted to be fair. To assess this, datasets can be systematically generated with specific biases and used to train machine learning classifiers, whose predictive outcomes then aid understanding of the bias-embedding process. Hence, an agent-based model (ABM) of a loan application process, representing various systemic biases across two demographic groups, was developed to produce synthetic datasets. By applying classifiers trained on these datasets to predict loan outcomes, we can assess how biased data leads to unfairness. This highlights a main contribution of this work: a framework for synthetic dataset generation with controllable bias injection. We also contribute a novel explainability technique that uses second-order Shapley values to show how mitigations affect the way classifiers leverage data features. In the experiments, both offline and online learning approaches are employed, and mitigations are applied at different stages of the modelling pipeline, such as during pre-processing and in-processing.
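The second-order (pairwise) Shapley quantities mentioned above can be sketched, in spirit, with the classical Shapley interaction index. This brute-force version, with an abstract value function `value(S)` returning the model output when only the features in `S` are present, is an assumption for illustration, not the authors' implementation:

```python
from itertools import combinations
from math import factorial

def shapley_interaction(value, n, i, j):
    """Exact second-order Shapley (interaction) index for features i and j.

    `value(S)` maps a frozenset of feature indices to the model's output
    when only those features are 'present' (e.g. others set to a baseline).
    Enumerates all subsets of the remaining features, so it is only
    feasible for small n; a sketch, not the paper's exact algorithm.
    """
    others = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for size in range(len(others) + 1):
        # Grabisch-Roubens interaction weight for coalitions of this size.
        weight = factorial(size) * factorial(n - size - 2) / factorial(n - 1)
        for subset in combinations(others, size):
            S = frozenset(subset)
            # Discrete second difference: joint effect of adding {i, j}
            # minus the two individual effects.
            delta = (value(S | {i, j}) - value(S | {i})
                     - value(S | {j}) + value(S))
            total += weight * delta
    return total
```

For a value function that pays off only when features 0 and 1 are jointly present, the index attributes the full effect to the pair (0, 1) and zero to non-interacting pairs, which is the kind of feature-dependency structure the proposed method quantifies.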