🤖 AI Summary
This work addresses the challenge of jointly ensuring global and local fairness in multi-class federated learning (FL), a problem previously unaddressed in a unified framework. We formally model both fairness constraints within a single optimization objective and quantify the minimal accuracy trade-off they induce. Targeting high-stakes domains such as finance, hiring, and healthcare, we propose a lightweight post-hoc algorithm grounded in the Bayes-optimal score function. The method comes with theoretical optimality guarantees, incurs negligible computational overhead and no additional communication cost, and integrates fairness-constrained optimization with multi-class calibration while requiring no modifications to local training procedures. Evaluated on benchmark multi-class fair FL tasks, our approach achieves state-of-the-art performance across three critical dimensions: the accuracy-fairness trade-off, computational efficiency, and communication efficiency.
📝 Abstract
With the emerging application of Federated Learning (FL) in finance, hiring, and healthcare, FL models are regulated to be fair, preventing disparities with respect to legally protected attributes such as race or gender. Two notions of fairness are important in FL: global and local fairness. Global fairness addresses the disparity across the entire population, while local fairness is concerned with the disparity within each client. Prior fair FL frameworks have improved either global or local fairness without considering both. Furthermore, while the majority of studies on fair FL focus on binary settings, many real-world applications are multi-class problems. This paper proposes a framework that characterizes the minimum accuracy loss incurred by enforcing a specified level of global and local fairness in multi-class FL settings. Our framework leads to a simple post-processing algorithm that derives fair outcome predictors from the Bayes-optimal score functions. Experimental results show that our algorithm outperforms the current state of the art (SOTA) with regard to accuracy-fairness trade-offs as well as computational and communication costs. Code is available at: https://github.com/papersubmission678/The-cost-of-local-and-global-fairness-in-FL .
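To make the post-processing idea concrete, here is a minimal illustrative sketch, not the paper's actual algorithm: starting from (estimates of) per-class scores, a post-processor can pick the class with the highest score after adding a per-group, per-class offset, and fit those offsets so that each group's prediction rates move toward the population's rates (a demographic-parity-style criterion). All names (`predict`, `class_rate_gap`, the synthetic data, and the simple offset-fitting loop) are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic setup: 3 classes, 2 sensitive groups;
# group 0's scores favor class 0, group 1's favor class 2.
n, k = 2000, 3
groups = rng.integers(0, 2, size=n)
scores = rng.random((n, k))
scores[groups == 0, 0] += 0.5
scores[groups == 1, 2] += 0.5

def predict(scores, groups, mu):
    """Post-processed prediction: argmax_c scores[i, c] + mu[group[i], c]."""
    return (scores + mu[groups]).argmax(axis=1)

def class_rate_gap(preds, groups, k):
    """Largest |group class-rate - overall class-rate| over groups and classes."""
    overall = np.bincount(preds, minlength=k) / len(preds)
    gaps = []
    for g in np.unique(groups):
        m = groups == g
        rates = np.bincount(preds[m], minlength=k) / m.sum()
        gaps.append(np.abs(rates - overall).max())
    return max(gaps)

# Fit offsets by nudging each group's class rates toward the overall
# rates -- a crude heuristic standing in for the paper's
# optimality-preserving construction from Bayes-optimal scores.
mu = np.zeros((2, k))
for _ in range(300):
    preds = predict(scores, groups, mu)
    overall = np.bincount(preds, minlength=k) / n
    for g in (0, 1):
        m = groups == g
        rates = np.bincount(preds[m], minlength=k) / m.sum()
        mu[g] += 0.05 * (overall - rates)

gap_before = class_rate_gap(predict(scores, groups, np.zeros((2, k))), groups, k)
gap_after = class_rate_gap(predict(scores, groups, mu), groups, k)
```

Note the appeal of this style of method in FL: only the score function and group offsets are involved, so no retraining or extra communication rounds are needed, which matches the cost profile the abstract claims.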