🤖 AI Summary
This work investigates the downstream fairness implications of mitigating intrinsic biases in large language models (LLMs). Because socio-economic biases, particularly in high-stakes domains such as finance, can propagate into deployed systems, we propose the first unified evaluation framework to systematically compare two bias-mitigation strategies: concept erasure (an intrinsic intervention) and counterfactual data augmentation (an extrinsic intervention), under both frozen embedding extraction and fine-tuned classification paradigms. Experiments on real-world financial classification tasks show that concept erasure reduces intrinsic gender bias by up to 94.9%, improves downstream demographic parity by up to 82%, and preserves model accuracy. Our key contribution is empirical evidence that early-stage intrinsic interventions transmit to downstream fairness, revealing a reproducible, quantifiable pathway for pre-deployment bias governance in LLMs.
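To make the intrinsic intervention concrete, here is a minimal sketch of projection-based concept erasure: the concept direction is estimated as the difference of class-mean embeddings and every embedding is projected onto its orthogonal complement. This is a generic illustration, not necessarily the exact unlearning procedure used in the paper; the function name `erase_concept` and the mean-difference estimator are assumptions for the example.

```python
import numpy as np

def erase_concept(embeddings, concept_labels):
    """Remove one linear concept direction from embeddings.

    Sketch of projection-based concept erasure: estimate the concept
    direction as the difference of class means, then project every
    embedding onto the subspace orthogonal to that direction.
    """
    X = np.asarray(embeddings, dtype=float)
    y = np.asarray(concept_labels)
    # Mean-difference estimate of the concept (e.g. gender) direction.
    direction = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
    direction /= np.linalg.norm(direction)
    # Subtract each embedding's component along the concept direction,
    # so the erased embeddings carry no linear signal for the concept.
    return X - np.outer(X @ direction, direction)
```

After erasure, a linear probe trained along the estimated direction can no longer separate the two groups, which is the intrinsic-bias reduction the summary refers to.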
📝 Abstract
Large Language Models (LLMs) exhibit socio-economic biases that can propagate into downstream tasks. While prior studies have questioned whether intrinsic bias in LLMs affects fairness at the downstream task level, this work empirically investigates that connection. We present a unified evaluation framework to compare intrinsic bias mitigation via concept unlearning with extrinsic bias mitigation via counterfactual data augmentation (CDA). We examine this relationship on real-world financial classification tasks, including salary prediction, employment status, and creditworthiness assessment. Using three open-source LLMs, we evaluate models both as frozen embedding extractors and as fine-tuned classifiers. Our results show that intrinsic bias mitigation through unlearning reduces intrinsic gender bias by up to 94.9%, while also improving downstream fairness metrics such as demographic parity by up to 82%, without compromising accuracy. Our framework offers practical guidance on where mitigation efforts are most effective and highlights the importance of applying early-stage mitigation before downstream deployment.
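The downstream metric reported above, demographic parity, measures the gap in positive-prediction rates between demographic groups; a smaller gap is fairer. A minimal sketch for the binary-group case (the helper name `demographic_parity_difference` is illustrative, not from the paper):

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: iterable of 0/1 predictions (e.g. "high salary" / "creditworthy").
    group:  iterable of 0/1 group membership (e.g. gender).
    Returns 0.0 under perfect demographic parity.
    """
    rates = {}
    for g in (0, 1):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        # Positive-prediction rate within group g.
        rates[g] = sum(preds) / len(preds)
    return abs(rates[1] - rates[0])
```

An "82% improvement" in this metric means the gap shrinks to 18% of its pre-mitigation value, while task accuracy is held roughly constant.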