A Generalised Exponentiated Gradient Approach to Enhance Fairness in Binary and Multi-class Classification Tasks

📅 2026-03-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of simultaneously satisfying multiple fairness definitions in multi-class classification, a limitation of existing fair-learning methods. The authors formulate fair learning as a multi-objective optimization problem that balances model accuracy against multiple linear fairness constraints, and propose the Generalised Exponentiated Gradient (GEG) algorithm to solve it. GEG unifies the handling of diverse fairness constraints across both binary and multi-class settings, offering notable flexibility and generality. Extensive experiments on ten datasets show that GEG improves fairness by up to 92% while sacrificing at most 14% in accuracy, outperforming six baseline approaches.

📝 Abstract
The widespread use of AI and ML models in sensitive areas raises significant concerns about fairness. While the research community has introduced various methods for bias mitigation in binary classification tasks, the issue remains under-explored in multi-class classification settings. To address this limitation, in this paper, we first formulate the problem of fair learning in multi-class classification as a multi-objective problem between effectiveness (i.e., prediction correctness) and multiple linear fairness constraints. Next, we propose a Generalised Exponentiated Gradient (GEG) algorithm to solve this task. GEG is an in-processing algorithm that enhances fairness in binary and multi-class classification settings under multiple fairness definitions. We conduct an extensive empirical evaluation of GEG against six baselines across seven multi-class and three binary datasets, using four widely adopted effectiveness metrics and three fairness definitions. GEG outperforms existing baselines, improving fairness by up to 92% with a decrease in accuracy of at most 14%.
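The abstract does not spell out the update rule, but GEG builds on the classic exponentiated-gradient scheme, in which a weight vector on the probability simplex is updated multiplicatively by the exponential of the negative gradient. As background, here is a minimal sketch of that base update on a toy linear objective; `eg_minimize`, `eta`, and `steps` are illustrative names, not the paper's API, and this is not the authors' full reduction with fairness constraints.

```python
import numpy as np

def eg_minimize(grad_fn, n, eta=0.1, steps=200):
    """Exponentiated-gradient descent over the probability simplex.

    grad_fn: maps the current weight vector w to the gradient of the objective.
    Starts from the uniform distribution and stays on the simplex by design.
    """
    w = np.full(n, 1.0 / n)
    for _ in range(steps):
        g = grad_fn(w)
        w = w * np.exp(-eta * g)  # multiplicative (exponentiated) update
        w /= w.sum()              # renormalise back onto the simplex
    return w

# Toy objective <c, w>: the minimiser concentrates mass on the smallest cost.
c = np.array([0.9, 0.2, 0.5])
w = eg_minimize(lambda w: c, 3)
```

In fairness reductions of this kind, the same multiplicative step is typically applied to Lagrange multipliers on the linear fairness constraints rather than directly to model weights; the paper's contribution is generalising that scheme to multiple fairness definitions and multi-class settings.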
Problem

Research questions and friction points this paper is trying to address.

fairness
multi-class classification
bias mitigation
fair learning
linear fairness constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generalised Exponentiated Gradient
multi-class fairness
in-processing algorithm
multi-objective optimization
bias mitigation
Maryam Boubekraoui
Laboratoire d’Intelligence Artificielle et Systèmes Industriels (LIASI), HESTIM Engineering and Business School, Casablanca, Morocco; Laboratory LAMAI, Faculty of Sciences and Technologies, Cadi Ayyad University, Marrakech, Morocco
Giordano d'Aloisio
Postdoctoral Researcher, Università degli Studi dell'Aquila
Software Fairness, Sustainability, Software Engineering, Empirical Software Engineering, Data Science
Antinisca Di Marco
Associate Professor at Università degli Studi di L'Aquila
Software Quality Engineering, AI Fairness, Data Science, Bioinformatics