AI Summary
This work addresses fairness challenges in federated learning that arise from client heterogeneity, which often leads to uneven model performance across participants. To tackle this issue, the paper proposes a systematic taxonomy that unifies performance-oriented and capability-oriented fairness strategies, clarifying the technical pathways of existing approaches. Through a comprehensive literature review and taxonomic analysis, the authors construct a structured system of fairness evaluation metrics, identify key challenges, and outline promising directions for future research. The study thus provides a coherent theoretical foundation, a unified classification perspective, and a forward-looking roadmap for advancing fairness-aware federated learning.
Abstract
Fairness in Federated Learning (FL) is emerging as a critical concern, driven by heterogeneous clients' constraints and the need for balanced model performance across diverse scenarios. In this survey, we present a comprehensive classification of state-of-the-art fairness-aware approaches from a multifaceted perspective, i.e., model performance-oriented and capability-oriented. Moreover, we provide a framework to categorize and address various fairness concerns and their associated technical aspects, examining their effectiveness in balancing equity and performance within FL frameworks. We further review several significant evaluation metrics used to quantify fairness. Finally, we explore open research directions and propose prospective solutions that could drive future advances in this important area, laying a solid foundation for researchers working toward fairness in FL.