🤖 AI Summary
This work addresses the fairness challenge in heterogeneous federated learning, where divergent client data distributions lead to uneven performance gains of the global model across local datasets. To mitigate this issue, the authors propose EAGLE, an algorithm that introduces “loss gap”—the performance discrepancy between the global model and each client’s local optimum—as an explicit fairness objective. By minimizing this gap through regularization, EAGLE prioritizes relative improvement over absolute loss alignment, thereby avoiding utility degradation for the majority of clients. The method integrates loss gap regularization, client priority scheduling, and a non-convex optimization framework, accompanied by theoretical convergence guarantees. Empirical results demonstrate that EAGLE consistently reduces the dispersion of loss gaps across clients in both convex and non-convex settings while maintaining overall model performance on par with strong baselines.
📝 Abstract
While clients may join federated learning to improve performance on data they rarely observe locally, they often remain self-interested, expecting the global model to perform well on their own data. This motivates an objective that ensures all clients achieve a similar loss gap, the difference in performance between the global model and the best model they could train using only their local data. To this end, we propose EAGLE, a novel federated learning algorithm that explicitly regularizes the global model to minimize disparities in loss gaps across clients. Our approach is particularly effective in heterogeneous settings, where the optimal local models of the clients may be misaligned. Unlike existing methods that encourage loss parity, potentially degrading performance for many clients, EAGLE targets fairness in relative improvements. We provide theoretical convergence guarantees for EAGLE under non-convex loss functions, and characterize how its iterates perform relative to the standard federated learning objective using a novel heterogeneity measure. Empirically, we demonstrate that EAGLE reduces the disparity in loss gaps among clients by prioritizing those furthest from their local optimal loss, while maintaining competitive utility in both convex and non-convex cases compared to strong baselines.
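The loss-gap idea can be made concrete with a small sketch. Assuming each client reports its loss under the global model and the loss of its best locally trained model, the gap is simply their difference, and one hypothetical way to penalize disparity is the variance of the gaps (the paper's exact regularizer and weighting are not specified here, so `lam` and the variance penalty are illustrative assumptions):

```python
import numpy as np

def loss_gaps(global_losses, local_opt_losses):
    """Per-client loss gap: global-model loss on a client's data minus the
    loss of the best model that client could train using only local data."""
    return np.asarray(global_losses) - np.asarray(local_opt_losses)

def fair_objective(global_losses, local_opt_losses, lam=0.5):
    """Illustrative fairness-regularized objective: mean global loss plus a
    penalty (here, variance -- an assumption, not the paper's exact form)
    on how unevenly the loss gaps are spread across clients."""
    gaps = loss_gaps(global_losses, local_opt_losses)
    return np.mean(global_losses) + lam * np.var(gaps)

# Two clients: client 0 gains little from the global model (large gap),
# client 1 is already close to its local optimum (small gap).
gaps = loss_gaps([0.9, 0.5], [0.2, 0.4])   # ~[0.7, 0.1]
obj = fair_objective([0.9, 0.5], [0.2, 0.4])
```

Under this reading, a scheduler that prioritizes clients with the largest gaps (client 0 above) shrinks the variance term, which matches the abstract's description of prioritizing clients furthest from their local optimal loss.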