AI Summary
To address subgroup unfairness arising from intersections of multiple sensitive attributes, this paper proposes the One-vs.-One (OvO) fairness mitigation framework, the first to systematically integrate pairwise subgroup comparison into fair machine learning. OvO explicitly models and mitigates intersectional bias, overcoming inherent limitations of single-attribute fairness approaches. The framework unifies the pre-processing, in-processing, and post-processing paradigms and supports six fairness metrics: the ratio and difference of demographic parity, equalized odds, and equal opportunity. Experiments on the Adult and COMPAS datasets demonstrate that OvO significantly outperforms conventional methods across all six fairness measures, exhibiting particular robustness under intersectional discrimination scenarios.
Abstract
With the widespread adoption of machine learning in the real world, the impact of discriminatory bias has attracted attention. In recent years, various methods to mitigate this bias have been proposed. However, most of them do not consider intersectional bias, which causes unfair treatment of people belonging to specific subgroups of a protected group when multiple sensitive attributes are taken into consideration. To mitigate this bias, in this paper, we propose a method called One-vs.-One Mitigation, which applies a pairwise comparison between subgroups defined by the sensitive attributes to fairness-aware binary classification. We compare our method with conventional fairness-aware binary classification methods in comprehensive settings using three approaches (pre-processing, in-processing, and post-processing), six metrics (the ratio and difference of demographic parity, equalized odds, and equal opportunity), and two real-world datasets (Adult and COMPAS). As a result, our method mitigates intersectional bias much better than the conventional methods in all settings. These results open up the potential of fairness-aware binary classification for solving more realistic problems that arise when there are multiple sensitive attributes.
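The core idea of comparing every pair of intersectional subgroups, rather than one sensitive attribute at a time, can be illustrated for the demographic-parity-difference metric. This is a minimal sketch, not the paper's implementation: the function name, the subgroup encoding, and the toy data are all illustrative.

```python
from itertools import combinations
import numpy as np

def ovo_dp_gap(y_pred, groups):
    """Worst-case demographic-parity gap over every pair of
    intersectional subgroups (a One-vs.-One style comparison).

    y_pred: binary predictions (0/1), one per sample.
    groups: subgroup label per sample, encoding the combination of
            sensitive attributes (e.g. sex x race).
    """
    y_pred = np.asarray(y_pred)
    groups = np.asarray(groups)
    # Positive-prediction rate for each intersectional subgroup.
    rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
    # Compare each pair of subgroups, not one attribute at a time.
    return max(abs(rates[a] - rates[b]) for a, b in combinations(rates, 2))

# Two sensitive attributes -> four intersectional subgroups.
sex = np.array(["F", "F", "M", "M", "F", "M", "F", "M"])
race = np.array(["A", "B", "A", "B", "A", "B", "B", "A"])
subgroup = np.char.add(sex, race)  # "FA", "FB", "MA", "MB"
y_pred = np.array([1, 0, 1, 1, 1, 1, 0, 1])

print(ovo_dp_gap(y_pred, subgroup))  # pairwise gap: 1.0
```

In this toy example, a single-attribute check on sex alone reports a gap of only 0.5, while the pairwise subgroup comparison exposes a gap of 1.0 (subgroup "FB" never receives a positive prediction), which is exactly the kind of hidden intersectional bias the method targets.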