A Survey on Group Fairness in Federated Learning: Challenges, Taxonomy of Solutions and Directions for Future Research

📅 2024-10-04
🏛️ arXiv.org
📈 Citations: 5
Influential: 0
🤖 AI Summary
Client data heterogeneity in federated learning exacerbates group unfairness with respect to sensitive attributes (e.g., race, gender). Method: This paper introduces a novel taxonomy for fairness-aware federated learning, structured along three dimensions: data partitioning, the location where the fairness strategy is applied, and the design of the fairness mechanism. It also incorporates aspects such as intersectional sensitive-attribute modeling and optimization balanced across multiple groups. Through bibliometric analysis, normalized cross-method comparison of fairness metrics, and mapping to application scenarios, we systematically survey 47 works, unifying their benchmarking protocols and dataset choices. Contribution/Results: We identify three critical research gaps: (1) dynamic modeling of intersectional sensitive attributes, (2) fair aggregation with low communication cost, and (3) verifiable fairness guarantees. Our taxonomy and empirical synthesis provide a comprehensive foundation for algorithm design, systems implementation, and rigorous empirical evaluation in fairness-aware federated learning.
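The summary above centers on group fairness metrics evaluated across sensitive attributes. Purely as an illustration of the kind of metric such works compare, here is a minimal sketch of statistical parity difference for a binary sensitive attribute; the function name and toy data are our own assumptions, not taken from the paper.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups
    defined by a binary sensitive attribute (0 or 1)."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

# Toy data (ours, not from the paper): predictions favor group 1.
y_pred    = [1, 0, 0, 1, 1, 1, 1, 1]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, sensitive))  # 0.5
```

In a federated setting, each client would observe this gap only on its local data, which is why heterogeneous group distributions across clients make global fairness hard to assess and enforce.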

📝 Abstract
Group fairness in machine learning is a critical area of research focused on achieving equitable outcomes across different groups defined by sensitive attributes such as race or gender. Federated learning, a decentralized approach to training machine learning models across multiple devices or organizations without sharing raw data, amplifies the need for fairness due to the heterogeneous data distributions across clients, which can exacerbate biases. The intersection of federated learning and group fairness has attracted significant interest, with 47 research works specifically dedicated to addressing this issue. However, no dedicated survey has focused comprehensively on group fairness in federated learning. In this work, we present an in-depth survey on this topic, addressing the critical challenges and reviewing related works in the field. We create a novel taxonomy of these approaches based on key criteria such as data partitioning, location, and applied strategies. Additionally, we explore broader concerns related to this problem and investigate how different approaches handle the complexities of various sensitive groups and their intersections. Finally, we review the datasets and applications commonly used in current research. We conclude by highlighting key areas for future research, emphasizing the need for more methods to address the complexities of achieving group fairness in federated systems.
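The abstract classifies approaches partly by the location where the fairness strategy is applied (e.g., at the clients or at the server). As a hypothetical sketch of a server-side strategy, and not the method of any specific surveyed work, the snippet below scales standard FedAvg weights down for clients whose local updates show larger fairness gaps; the exponential penalty and all names are our own assumptions.

```python
import numpy as np

def fairness_weighted_aggregate(client_params, client_sizes, client_gaps, beta=1.0):
    """Hypothetical server-side aggregation (illustration only):
    FedAvg-style weights, scaled down for clients whose local data
    shows a larger group-fairness gap.

    client_params: list of 1-D parameter vectors, one per client
    client_sizes:  number of local samples per client (FedAvg weighting)
    client_gaps:   local group-fairness gap per client (e.g., parity difference)
    beta:          how strongly unfairness is penalized
    """
    sizes = np.asarray(client_sizes, dtype=float)
    gaps = np.asarray(client_gaps, dtype=float)
    weights = sizes * np.exp(-beta * gaps)  # penalize unfair clients
    weights /= weights.sum()                # normalize to a convex combination
    stacked = np.stack(client_params)       # shape: (num_clients, num_params)
    return weights @ stacked                # weighted average of parameters

# Toy example: three clients; the third has a large local fairness gap.
params = [np.array([1.0, 0.0]), np.array([0.8, 0.2]), np.array([0.0, 1.0])]
print(fairness_weighted_aggregate(params, [100, 100, 100], [0.05, 0.10, 0.60]))
```

One design note: because the adjustment stays inside a single weighted average, it adds no extra communication rounds, which connects to the low-communication-cost aggregation gap highlighted in the summary.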
Problem

Research questions and friction points this paper is trying to address.

Addressing group fairness challenges in federated learning systems
Surveying methodologies for equitable outcomes across sensitive attributes
Proposing a taxonomy of approaches to mitigating bias arising from decentralized, heterogeneous data
Innovation

Methods, ideas, or system contributions that make the work stand out.

First dedicated survey on group fairness in federated learning
Novel taxonomy based on data partitioning, location, and applied strategies
Review of methods handling multiple sensitive groups and their intersections
Teresa Salazar
Centre for Informatics and Systems, Department of Informatics Engineering of the University of Coimbra, Coimbra, 3030-790, Portugal.
Helder Araújo
Institute of Systems and Robotics, Department of Electrical and Computer Engineering of the University of Coimbra, Coimbra, 3030-790, Portugal.
Alberto Cano
Associate Vice President for Research Computing, Virginia Tech, USA
Machine Learning, Data Stream Mining, Concept Drift, Multi-label Learning, GPU
Pedro Abreu
Centre for Informatics and Systems, Department of Informatics Engineering of the University of Coimbra, Coimbra, 3030-790, Portugal.