🤖 AI Summary
In federated learning (FL), data heterogeneity degrades both the local accuracy and the fairness—particularly group fairness—of a single global model, while global and local fairness objectives often conflict. To address this, we propose a clustering-driven personalized FL framework that, for the first time, incorporates local group fairness metrics directly into client cluster assignment. This implicitly models the intrinsic alignment between personalization and local fairness, enabling fairness improvements without explicit fairness constraints or interventions. By jointly optimizing cluster formation and personalized model training, our method supports a tunable trade-off between accuracy and fairness. Theoretical analysis and extensive experiments on multiple benchmarks demonstrate that personalization itself improves local fairness; moreover, our approach matches or surpasses state-of-the-art locally fair FL methods in both local accuracy and group fairness.
📝 Abstract
Federated Learning (FL) has been a pivotal paradigm for the collaborative training of machine learning models across distributed datasets. In heterogeneous settings, it has been observed that a single shared FL model can lead to low local accuracy, motivating personalized FL algorithms. In parallel, fair FL algorithms have been proposed to enforce group fairness on the global model. Again, in heterogeneous settings, global and local fairness do not necessarily align, motivating the recent literature on locally fair FL. In this paper, we propose new FL algorithms for heterogeneous settings, spanning the space between personalized and locally fair FL. Building on existing clustering-based personalized FL methods, we incorporate a new fairness metric into cluster assignment, enabling a tunable balance between local accuracy and fairness. Our methods match or exceed the performance of existing locally fair FL approaches, without explicit fairness intervention. We further demonstrate (numerically and analytically) that personalization alone can improve local fairness and that our methods exploit this alignment when present.
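The abstract describes folding a fairness metric into cluster assignment with a tunable accuracy–fairness balance. A minimal sketch of that idea follows; it is an illustration under stated assumptions, not the paper's actual algorithm. The function names, the choice of statistical parity gap as the group fairness metric, and the linear mixing weight `lam` are all assumptions for illustration.

```python
import numpy as np

def statistical_parity_gap(preds, groups):
    """Hypothetical local group fairness metric: absolute difference in
    positive-prediction rates between two protected groups (0 and 1)."""
    preds, groups = np.asarray(preds), np.asarray(groups)
    rate_a = preds[groups == 0].mean()
    rate_b = preds[groups == 1].mean()
    return abs(rate_a - rate_b)

def assign_cluster(client_losses, client_gaps, lam=0.5):
    """Assign a client to the cluster minimizing a convex combination of
    its local loss and local fairness gap under each cluster's model.
    lam=0 recovers accuracy-only (standard clustered FL) assignment;
    lam=1 assigns purely by local group fairness."""
    scores = [(1 - lam) * loss + lam * gap
              for loss, gap in zip(client_losses, client_gaps)]
    return int(np.argmin(scores))

# Example: a client evaluates two cluster models locally.
# Cluster 0 is more accurate; cluster 1 is fairer on this client's data.
losses = [0.2, 0.5]   # local loss under each cluster model
gaps = [0.4, 0.1]     # local fairness gap under each cluster model
print(assign_cluster(losses, gaps, lam=0.0))  # accuracy-only -> cluster 0
print(assign_cluster(losses, gaps, lam=1.0))  # fairness-only -> cluster 1
```

Sweeping `lam` between 0 and 1 traces out the accuracy–fairness trade-off the abstract refers to; in the paper's actual method the trade-off is realized through joint cluster formation and personalized training rather than this simple per-client score.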