Fairness in Federated Learning: Fairness for Whom?

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper identifies five methodological shortcomings in current federated learning (FL) fairness research: overreliance on the server–client architecture; detachment from real-world socio-technical contexts; conflation of system-level protections with individual user rights; interventions confined to isolated lifecycle stages; and neglect of multi-stakeholder alignment. To address these, we propose a harm-centered fairness framework emphasizing context-sensitive fairness definitions, end-to-end risk mapping across the FL lifecycle, and stakeholder-aligned governance. Through systematic literature annotation and cross-dimensional qualitative coding—spanning fairness definitions, design principles, evaluation methodologies, and application scenarios—we categorize prevalent methodological deviations. Our analysis uncovers a fundamental misalignment between prevailing fairness formalisms and operational realities in FL deployments. The study provides critical guidance for developing responsible, accountable, and contextually grounded FL fairness theory and practice.

📝 Abstract
Fairness in federated learning (FL) has emerged as a rapidly growing area of research, with numerous works proposing formal definitions and algorithmic interventions. Yet, despite this technical progress, fairness in FL is often defined and evaluated in ways that abstract away from the sociotechnical contexts in which these systems are deployed. In this paper, we argue that existing approaches tend to optimize narrow system-level metrics, such as performance parity or contribution-based rewards, while overlooking how harms arise throughout the FL lifecycle and how they impact diverse stakeholders. We support this claim through a critical analysis of the literature, based on a systematic annotation of papers for their fairness definitions, design decisions, evaluation practices, and motivating use cases. Our analysis reveals five recurring pitfalls: 1) fairness framed solely through the lens of the server–client architecture, 2) a mismatch between simulations and their motivating use cases and contexts, 3) definitions that conflate protecting the system with protecting its users, 4) interventions that target isolated stages of the lifecycle while neglecting upstream and downstream effects, and 5) a lack of multi-stakeholder alignment where multiple fairness definitions can be relevant at once. Building on these insights, we propose a harm-centered framework that links fairness definitions to concrete risks and stakeholder vulnerabilities. We conclude with recommendations for more holistic, context-aware, and accountable fairness research in FL.
Problem

Research questions and friction points this paper is trying to address.

Fairness definitions overlook the sociotechnical contexts in which FL systems are deployed
Existing approaches optimize narrow system-level metrics, ignoring harms that arise across the FL lifecycle
Lack of multi-stakeholder alignment when multiple fairness definitions are relevant at once
Innovation

Methods, ideas, or system contributions that make the work stand out.

Harm-centered framework linking fairness definitions to concrete risks and stakeholder vulnerabilities
Critical analysis of the FL fairness literature, identifying five recurring methodological pitfalls
Stakeholder-aligned governance and end-to-end risk mapping across the FL lifecycle