🤖 AI Summary
This work addresses the challenge of simultaneously satisfying multiple fairness definitions in multiclass classification, a limitation of existing fair learning methods. The authors formulate fair learning as a multi-objective optimization problem that balances model accuracy against multiple linear fairness constraints and propose the Generalised Exponentiated Gradient (GEG) algorithm to solve it. GEG unifies the handling of diverse fairness constraints across both binary and multiclass settings, offering notable flexibility and generality. Extensive experiments on ten datasets demonstrate that GEG improves fairness by up to 92% while sacrificing at most 14% in accuracy, outperforming six baseline approaches.
📝 Abstract
The widespread use of AI and ML models in sensitive areas raises significant concerns about fairness. While the research community has introduced various methods for bias mitigation in binary classification tasks, the issue remains under-explored in multi-class classification settings. To address this limitation, in this paper, we first formulate the problem of fair learning in multi-class classification as a multi-objective optimization problem that balances effectiveness (i.e., prediction correctness) against multiple linear fairness constraints. Next, we propose the Generalised Exponentiated Gradient (GEG) algorithm to solve this task. GEG is an in-processing algorithm that enhances fairness in binary and multi-class classification settings under multiple fairness definitions. We conduct an extensive empirical evaluation of GEG against six baselines across seven multi-class and three binary datasets, using four widely adopted effectiveness metrics and three fairness definitions. GEG outperforms the existing baselines, improving fairness by up to 92% while reducing accuracy by at most 14%.
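To make the formulation concrete, the classic exponentiated-gradient reduction (which GEG generalises) can be sketched as a game between a learner and Lagrange multipliers on the linear fairness constraints. The sketch below is illustrative only, not the authors' GEG: the hypothesis class, error values, constraint values, and all numeric settings are made-up assumptions, and the "learner" is reduced to an argmin over a small finite set of candidate classifiers.

```python
import numpy as np

def exponentiated_gradient(err, G, eps=0.02, B=10.0, eta=0.5, T=200):
    """Illustrative exponentiated-gradient reduction (not the authors' GEG).

    err: (H,) error of each candidate classifier h.
    G:   (K, H) value of K linear fairness constraints g_k(h) <= eps.
    Alternates:
      1) learner best-response: argmin_h err(h) + sum_k lambda_k * g_k(h)
      2) multiplicative (exponentiated) update of the multipliers,
         driven by the constraint violations g_k(h) - eps.
    Returns a distribution over classifiers (a randomized classifier).
    """
    H = len(err)
    K = G.shape[0]
    # theta parameterizes bounded multipliers with total mass at most B;
    # the extra slack coordinate lets all lambda_k shrink toward zero.
    theta = np.zeros(K + 1)
    counts = np.zeros(H)
    for _ in range(T):
        w = np.exp(theta - theta.max())
        lam = B * w[:K] / w.sum()            # bounded multipliers lambda_k
        h = int(np.argmin(err + lam @ G))    # cost-sensitive best response
        counts[h] += 1
        theta[:K] += eta * (G[:, h] - eps)   # exponentiated-gradient ascent
    return counts / counts.sum()             # average play = randomized classifier

# Toy instance (all numbers hypothetical): three classifiers, one constraint.
err = np.array([0.10, 0.15, 0.30])           # lowest-error model...
G = np.array([[0.20, 0.05, 0.00]])           # ...is also the least fair.
dist = exponentiated_gradient(err, G)
```

Under these toy values the returned mixture concentrates on the fairer classifiers, trading a little error for a constraint violation near the tolerance `eps`, which mirrors the accuracy-fairness trade-off the abstract reports empirically.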