Software Fairness Dilemma: Is Bias Mitigation a Zero-Sum Game?

📅 2025-08-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates whether bias mitigation in tabular machine learning inherently entails a “zero-sum game”—i.e., whether improving fairness for disadvantaged groups necessarily degrades performance for advantaged groups or overall model utility. We systematically evaluate eight state-of-the-art preprocessing, in-processing, and post-processing bias mitigation methods across 44 tasks, using five real-world datasets and four mainstream model families. Empirical results confirm that most existing methods exhibit significant zero-sum trade-offs. To address this, we propose a novel paradigm—*targeted disadvantaged-group mitigation*—which applies advanced mitigation techniques exclusively to disadvantaged subgroups. Our experiments demonstrate that this strategy substantially improves fairness for these groups (e.g., reducing equal opportunity difference by 32–67%) without compromising advantaged-group performance or overall accuracy. This work provides the first empirical evidence and practical methodology for achieving *non-zero-sum fairness*, challenging the prevailing assumption of inherent fairness–utility trade-offs in tabular ML.
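The equal opportunity difference cited above can be computed as the gap in true positive rates between the privileged and unprivileged groups. A minimal sketch, assuming binary labels and a binary group indicator (function name and the `group == 1` privileged-group convention are illustrative, not from the paper):

```python
import numpy as np

def equal_opportunity_difference(y_true, y_pred, group):
    """Equal opportunity difference: TPR(privileged) - TPR(unprivileged).

    Assumes binary labels/predictions and group == 1 for the privileged
    group (an illustrative convention, not the paper's encoding).
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))

    def tpr(mask):
        # True positive rate within the subgroup selected by `mask`.
        pos = (y_true == 1) & mask
        return y_pred[pos].mean() if pos.any() else 0.0

    return tpr(group == 1) - tpr(group == 0)
```

A value near zero indicates that qualified members of both groups are recommended at similar rates; mitigation methods aim to shrink its magnitude.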

📝 Abstract
Fairness is a critical requirement for Machine Learning (ML) software, driving the development of numerous bias mitigation methods. Previous research has identified a leveling-down effect in bias mitigation for computer vision and natural language processing tasks, where fairness is achieved by lowering performance for all groups without benefiting the unprivileged group. However, it remains unclear whether this effect applies to bias mitigation for tabular data tasks, a key area in fairness research with significant real-world applications. This study evaluates eight bias mitigation methods for tabular data, including both widely used and cutting-edge approaches, across 44 tasks using five real-world datasets and four common ML models. Contrary to earlier findings, our results show that these methods operate in a zero-sum fashion, where improvements for unprivileged groups are tied to reduced benefits for traditionally privileged groups. However, previous research indicates that the perception of a zero-sum trade-off might complicate the broader adoption of fairness policies. To explore alternatives, we investigate an approach that applies the state-of-the-art bias mitigation method solely to unprivileged groups, showing potential to enhance the benefits of unprivileged groups without negatively affecting privileged groups or overall ML performance. Our study highlights potential pathways for achieving fairness improvements without zero-sum trade-offs, which could help advance the adoption of bias mitigation methods.

Problem

Research questions and friction points this paper is trying to address.

Does the leveling-down effect found in vision and NLP bias mitigation extend to tabular data tasks?
Do fairness gains for unprivileged groups come at the cost of reduced benefits for privileged groups, i.e., a zero-sum trade-off?
Can bias mitigation benefit unprivileged groups without harming privileged groups or overall ML performance?

Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale evaluation of eight bias mitigation methods across 44 tasks, five real-world datasets, and four ML model families
Empirical evidence that existing mitigation methods exhibit zero-sum fairness trade-offs on tabular data
A targeted strategy that applies state-of-the-art mitigation solely to unprivileged groups, avoiding zero-sum trade-offs
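The targeted strategy described above can be illustrated with a simple post-processing sketch: adjust the decision threshold only for the unprivileged group, leaving privileged-group predictions untouched. The thresholding approach and the specific threshold values here are illustrative assumptions for exposition, not the paper's actual method:

```python
import numpy as np

def targeted_threshold(scores, group, base_thresh=0.5, unpriv_thresh=0.4):
    """Post-processing applied solely to the unprivileged group (group == 0):
    its members get a lower decision threshold, while privileged predictions
    are left unchanged. Threshold values are illustrative placeholders.
    """
    scores, group = np.asarray(scores), np.asarray(group)
    # Pick a per-instance threshold based on group membership.
    thresh = np.where(group == 0, unpriv_thresh, base_thresh)
    return (scores >= thresh).astype(int)
```

Because only unprivileged-group decisions change, privileged-group outcomes and their accuracy are preserved by construction, which is the non-zero-sum property the paper explores.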