When the Server Steps In: Calibrated Updates for Fair Federated Learning

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the issue of unfair bias in global models caused by client data heterogeneity in federated learning. To mitigate this problem without altering client-side training procedures or communication protocols, the authors propose EquFL, a server-side calibration mechanism compatible with the standard FedAvg framework. EquFL corrects model bias by optimizing a fairness-aware loss function during aggregation. Theoretically, the method is shown to converge to the same optimal solution as FedAvg, ensuring no degradation in convergence guarantees. Empirical evaluations demonstrate that EquFL significantly reduces systemic bias across diverse client groups while maintaining comparable convergence performance, thereby enhancing model fairness without compromising efficiency.

📝 Abstract
Federated learning (FL) has emerged as a transformative distributed learning paradigm, enabling multiple clients to collaboratively train a global model under the coordination of a central server without sharing their raw training data. While FL offers notable advantages, it faces critical challenges in ensuring fairness across diverse demographic groups. To address these fairness concerns, various fairness-aware debiasing methods have been proposed. However, many of these approaches either require modifications to clients' training protocols or lack flexibility in their aggregation strategies. In this work, we address these limitations by introducing EquFL, a novel server-side debiasing method designed to mitigate bias in FL systems. EquFL operates by allowing the server to generate a single calibrated update after receiving model updates from the clients. This calibrated update is then integrated with the aggregated client updates to produce an adjusted global model that reduces bias. Theoretically, we establish that EquFL converges to the optimal global model achieved by FedAvg and effectively reduces fairness loss over training rounds. Empirically, we demonstrate that EquFL significantly mitigates bias within the system, showcasing its practical effectiveness.
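The abstract describes a round as: aggregate the client updates as in FedAvg, then have the server add a single calibrated update that descends a fairness-aware loss. A minimal sketch of that flow, assuming flat parameter vectors and a hypothetical `fairness_grad` callable (the paper's actual fairness loss is not specified here):

```python
import numpy as np

def fedavg_aggregate(client_updates, client_weights):
    """Weighted average of client model updates (standard FedAvg)."""
    weights = np.asarray(client_weights, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, client_updates))

def equfl_round(global_model, client_updates, client_weights,
                fairness_grad, calib_lr=0.1):
    """One EquFL-style round (illustrative sketch, not the paper's exact rule).

    `fairness_grad` is a hypothetical callable returning the gradient of a
    server-side fairness-aware loss at a given model; `calib_lr` is an
    assumed step size for the server's single calibrated update.
    """
    # Standard FedAvg step: apply the weighted-average client update.
    aggregate = fedavg_aggregate(client_updates, client_weights)
    candidate = global_model + aggregate
    # Server-side calibration: one gradient step on the fairness loss,
    # integrated with the aggregated update to form the adjusted model.
    calibrated_update = -calib_lr * fairness_grad(candidate)
    return candidate + calibrated_update
```

With `calib_lr=0`, the round reduces exactly to FedAvg, which matches the claimed compatibility with the standard framework; the calibration only perturbs the aggregate in the direction that lowers the fairness loss.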
Problem

Research questions and friction points this paper is trying to address.

fairness
federated learning
bias mitigation
server-side calibration
demographic fairness
Innovation

Methods, ideas, or system contributions that make the work stand out.

fair federated learning
server-side debiasing
calibrated update
bias mitigation
federated aggregation