pFedFair: Towards Optimal Group Fairness-Accuracy Trade-off in Heterogeneous Federated Learning

📅 2025-03-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
In heterogeneous federated learning (FL), achieving group fairness—e.g., statistical parity or equal opportunity—while maintaining high model accuracy remains challenging due to conflicting optimization objectives. Method: This paper proposes pFedFair, a personalized FL framework that dynamically optimizes the fairness–accuracy trade-off *locally* at each client, avoiding global fairness constraints that degrade accuracy. It enforces sensitivity-agnostic constraints *client-wise*, enabling provable group fairness guarantees under non-IID data. The approach integrates personalized model updates, fairness-aware embedding adaptation, and distributed optimization. Contribution/Results: Extensive experiments on benchmark and synthetic datasets demonstrate that pFedFair significantly outperforms existing FL methods, achieving state-of-the-art Pareto-optimal trade-offs between classification accuracy and multiple group fairness metrics—without sacrificing predictive performance.

📝 Abstract
Federated learning (FL) algorithms commonly aim to maximize clients' accuracy by training a model on their collective data. However, in several FL applications, the model's decisions should meet a group fairness constraint, i.e., be independent of sensitive attributes such as gender or race. While such group fairness constraints can be incorporated into the objective function of the FL optimization problem, in this work we show that this approach leads to suboptimal classification accuracy in an FL setting with heterogeneous client distributions. To achieve an optimal accuracy-group fairness trade-off, we propose the Personalized Federated Learning for Client-Level Group Fairness (pFedFair) framework, where clients locally impose their fairness constraints over the distributed training process. Leveraging image embedding models, we extend the application of pFedFair to computer vision settings and numerically show that pFedFair achieves an optimal group fairness-accuracy trade-off in heterogeneous FL settings. We present the results of several numerical experiments on benchmark and synthetic datasets, which highlight the suboptimality of non-personalized FL algorithms and the improvements made by the pFedFair method.
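The core idea described above — each client adding its own group-fairness term to its local objective, rather than the server enforcing a single global constraint — can be sketched as follows. This is an illustrative sketch only, not the paper's actual algorithm: the logistic model, the statistical-parity penalty on predicted scores, and all function names here are assumptions chosen for a minimal runnable example.

```python
import numpy as np

def statistical_parity_gap(scores, sensitive):
    """Absolute difference in mean predicted score between the two groups."""
    return abs(scores[sensitive == 1].mean() - scores[sensitive == 0].mean())

def local_fair_update(w, X, y, s, lr=0.1, lam=1.0):
    """One client-local gradient step: logistic loss plus lam times a
    statistical-parity penalty computed on the client's own data.
    (Illustrative stand-in for a client-level fairness constraint.)"""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))           # predicted probabilities
    grad_loss = X.T @ (p - y) / len(y)           # logistic-loss gradient
    # Gradient of |mean_{s=1} p - mean_{s=0} p| via the group-mean gradients.
    n1, n0 = max((s == 1).sum(), 1), max((s == 0).sum(), 1)
    g1 = X[s == 1].T @ (p[s == 1] * (1 - p[s == 1])) / n1
    g0 = X[s == 0].T @ (p[s == 0] * (1 - p[s == 0])) / n0
    gap = p[s == 1].mean() - p[s == 0].mean()
    grad_fair = np.sign(gap) * (g1 - g0)
    return w - lr * (grad_loss + lam * grad_fair)
```

Because the penalty is evaluated per client, each client can tune `lam` to its own heterogeneous distribution instead of sharing one global constraint — the personalization that the abstract argues is needed for an optimal trade-off.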
Problem

Research questions and friction points this paper is trying to address.

Achieve optimal fairness-accuracy trade-off in federated learning.
Address group fairness constraints in heterogeneous client distributions.
Improve fairness and accuracy in computer vision applications.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Personalized Federated Learning for fairness
Client-level group fairness constraints
Optimal fairness-accuracy trade-off in FL