Fairness-informed Pareto Optimization: An Efficient Bilevel Framework

📅 2026-01-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing fair machine learning methods, which often yield Pareto-inefficient solutions or are constrained by the perspective bias inherent in specific fairness metrics. To overcome these challenges, the authors propose BADR, a bilevel optimization framework: the lower-level problem learns a model via weighted empirical risk minimization, while the upper-level problem adaptively rescales group weights to optimize any user-specified fairness objective, thereby recovering Pareto-optimal trade-offs between fairness and accuracy. BADR is agnostic to the choice of fairness metric, transcending the efficiency and perspective constraints of conventional approaches. The paper further introduces two efficient single-loop algorithms, BADR-GD and BADR-SGD. Extensive experiments demonstrate that BADR consistently outperforms existing Pareto-efficient methods across diverse tasks and fairness measures, and the authors release the badr toolbox to facilitate reproducibility and adoption.

📝 Abstract
Despite their promise, fair machine learning methods often yield Pareto-inefficient models, in which the performance of certain groups can be improved without degrading that of others. This issue arises frequently in traditional in-processing approaches such as fairness-through-regularization. In contrast, existing Pareto-efficient approaches are biased towards a certain perspective on fairness and fail to adapt to the broad range of fairness metrics studied in the literature. In this paper, we present BADR, a simple framework to recover the optimal Pareto-efficient model for any fairness metric. Our framework recovers its models through a Bilevel Adaptive Rescalarisation procedure. The lower level is a weighted empirical risk minimization task where the weights are a convex combination of the groups, while the upper level optimizes the chosen fairness objective. We equip our framework with two novel large-scale, single-loop algorithms, BADR-GD and BADR-SGD, and establish their convergence guarantees. We release badr, an open-source Python toolbox implementing our framework for a variety of learning tasks and fairness metrics. Finally, we conduct extensive numerical experiments demonstrating the advantages of BADR over existing Pareto-efficient approaches to fairness.
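The bilevel recipe in the abstract, a lower-level weighted empirical risk minimization over a convex combination of group losses and an upper level that adapts those weights toward a fairness objective in a single loop, can be illustrated in a few lines. The sketch below is not the paper's BADR-GD; the synthetic two-group data, the step sizes, and the choice of the group-loss gap as the fairness objective are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic binary classification data with two groups.
n = 400
X = rng.normal(size=(n, 2))
g = rng.integers(0, 2, size=n)            # group membership (0 or 1)
# Group 1's labels are noisier, so an unweighted model fits it worse.
y = (X[:, 0] + 0.5 * g * rng.normal(size=n) > 0).astype(float)

def group_loss(w, grp):
    """Logistic loss restricted to one group."""
    m = g == grp
    z = X[m] @ w
    return np.mean(np.log1p(np.exp(-z)) + (1 - y[m]) * z)

def group_grad(w, grp):
    """Gradient of the group-restricted logistic loss."""
    m = g == grp
    z = X[m] @ w
    p = 1.0 / (1.0 + np.exp(-z))
    return X[m].T @ (p - y[m]) / m.sum()

w = np.zeros(2)
lam = np.array([0.5, 0.5])                # convex combination over groups
eta_w, eta_lam = 0.5, 0.05

for _ in range(500):
    # Lower level: one gradient step on the weighted empirical risk.
    w -= eta_w * (lam[0] * group_grad(w, 0) + lam[1] * group_grad(w, 1))
    # Upper level: shift weight toward the worse-off group; here the
    # "fairness objective" is the gap between the two group losses.
    gap = group_loss(w, 1) - group_loss(w, 0)
    lam[1] = np.clip(lam[1] + eta_lam * gap, 0.0, 1.0)
    lam[0] = 1.0 - lam[1]                 # project back onto the simplex

final_gap = abs(group_loss(w, 0) - group_loss(w, 1))
```

Both levels advance in the same loop, which is the single-loop flavor of BADR-GD; the actual update rules, the handling of general fairness metrics, and the convergence guarantees are developed in the paper.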
Problem

Research questions and friction points this paper is trying to address.

fair machine learning
Pareto efficiency
fairness metrics
bilevel optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pareto efficiency
bilevel optimization
fair machine learning
adaptive rescaling
single-loop algorithm
Sofiane Tanji
INMA/ICTEAM, Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium

Samuel Vaiter
CNRS Researcher
applied mathematics, machine learning, optimization

Yassine Laguel
Lab. Jean Alexandre Dieudonné, Université Côte d’Azur, Nice, 06000, France