🤖 AI Summary
Federated learning faces challenges in jointly ensuring fairness and differential privacy. This paper proposes FedFDP, the first framework to embed fairness awareness directly into the gradient clipping process. It theoretically derives an optimal fairness regularization parameter and introduces a loss-value-driven adaptive clipping mechanism to reduce privacy budget consumption. Under strict $(\varepsilon,\delta)$-differential privacy guarantees, FedFDP simultaneously optimizes both individual fairness and group fairness (e.g., Demographic Parity and Equalized Odds). Convergence is rigorously established via theoretical analysis. Extensive experiments on multiple benchmark datasets demonstrate state-of-the-art performance: accuracy improves by up to 3.2%, individual fairness disparity decreases by 41.7% on average, group fairness disparity drops by 38.5% on average, and privacy budget usage is reduced by 29.6%.
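The loss-value-driven adaptive clipping idea above can be sketched as follows. This is a minimal illustration, not the paper's exact mechanism: the quantile-based bound, the `history` buffer, and the function name `adaptive_loss_clip` are all assumptions introduced here. The intuition it demonstrates is standard: a tighter clipping bound on the uploaded scalar loss means less Gaussian noise is required for the same privacy guarantee, so budget consumption drops.

```python
import numpy as np

def adaptive_loss_clip(loss_value, history, quantile=0.9, floor=1e-3):
    """Clip a scalar loss to a bound derived from recent loss history.

    Hypothetical sketch: the bound tracks a high quantile of previously
    observed losses, so it adapts downward as training converges. A
    smaller bound shrinks the sensitivity of the uploaded value, and
    hence the Gaussian noise needed for a fixed (epsilon, delta).
    """
    # Fall back to a small floor when no history is available yet.
    bound = max(float(np.quantile(history, quantile)), floor) if history else floor
    return float(np.clip(loss_value, 0.0, bound)), bound
```

A server would then add Gaussian noise calibrated to `bound` (the sensitivity) rather than to a fixed worst-case loss range.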
📝 Abstract
Federated learning (FL) is an emerging machine learning paradigm designed to address the challenge of data silos, and it has attracted considerable attention. However, FL encounters persistent issues related to fairness and data privacy. To tackle these challenges simultaneously, we propose a fairness-aware federated learning algorithm called FedFair. Building on FedFair, we introduce differential privacy to create the FedFDP algorithm, which addresses the trade-offs among fairness, privacy protection, and model performance. In FedFDP, we develop a fairness-aware gradient clipping technique to explore the relationship between fairness and differential privacy. Through convergence analysis, we identify the optimal fairness adjustment parameters that achieve both maximum model performance and fairness. Additionally, we present an adaptive clipping method for uploaded loss values to reduce privacy budget consumption. Extensive experimental results show that FedFDP significantly surpasses state-of-the-art solutions in both model performance and fairness.
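The fairness-aware gradient clipping described above builds on the standard DP-SGD primitive: clip each per-sample gradient to a fixed norm, aggregate, and add Gaussian noise. The sketch below shows that primitive with a fairness re-weighting hook; the `fairness_weights` parameter and the convex-combination aggregation are hypothetical illustrations of where a fairness adjustment could enter, not the paper's actual rule.

```python
import numpy as np

def fair_clip_and_noise(per_sample_grads, fairness_weights, clip_norm,
                        noise_multiplier, rng):
    """DP-SGD-style clipping and noising with a hypothetical fairness tilt.

    Each gradient is clipped to clip_norm (bounding per-sample sensitivity),
    then aggregated with normalized fairness weights; Gaussian noise scaled
    to clip_norm is added before the update is released.
    """
    clipped = []
    for g in per_sample_grads:
        factor = min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped.append(g * factor)
    # Normalize weights to a convex combination; uniform weights recover
    # the plain clipped mean of standard DP-SGD.
    w = np.asarray(fairness_weights, dtype=float)
    w = w / w.sum()
    agg = sum(wi * gi for wi, gi in zip(w, clipped))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=agg.shape)
    return agg + noise
```

With uniform weights and `noise_multiplier=0.0` this reduces to the ordinary clipped mean; a fairness-aware scheme would instead set the weights (or per-sample clip bounds) from client disparity signals.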