Fairness Without Demographics in Human-Centered Federated Learning

📅 2024-04-30
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of ensuring cross-client fairness in federated learning when sensitive attributes (e.g., gender, race) are unknown—a critical gap in human-centered FL. Methodologically, we propose the first demographic-information-free federated fairness paradigm: (1) a Hessian-based fairness regularizer that minimizes the largest eigenvalue of the Hessian w.r.t. model parameters, implicitly constraining sensitivity to latent sensitive dimensions; and (2) an adaptive weighted aggregation mechanism integrating local error rates and loss curvature to mitigate the fairness–accuracy trade-off under heterogeneous bias. Evaluated on multiple real-world multi-source datasets, our approach reduces ΔDP by 37–62% without sacrificing model accuracy and demonstrates robustness to both single- and multi-source biases. This work establishes a deployable, sensitive-label-free pathway toward fairness in federated learning.

Technology Category

Application Category

📝 Abstract
Federated learning (FL) enables collaborative model training while preserving data privacy, making it suitable for decentralized human-centered AI applications. However, a significant research gap remains in ensuring fairness in these systems. Current fairness strategies in FL require knowledge of bias-creating/sensitive attributes, clashing with FL's privacy principles. Moreover, in human-centered datasets, sensitive attributes may remain latent. To tackle these challenges, we present a novel bias mitigation approach inspired by"Fairness without Demographics"in machine learning. The presented approach achieves fairness without needing knowledge of sensitive attributes by minimizing the top eigenvalue of the Hessian matrix during training, ensuring equitable loss landscapes across FL participants. Notably, we introduce a novel FL aggregation scheme that promotes participating models based on error rates and loss landscape curvature attributes, fostering fairness across the FL system. This work represents the first approach to attaining"Fairness without Demographics"in human-centered FL. Through comprehensive evaluation, our approach demonstrates effectiveness in balancing fairness and efficacy across various real-world applications, FL setups, and scenarios involving single and multiple bias-inducing factors, representing a significant advancement in human-centered FL.
Problem

Research questions and friction points this paper is trying to address.

Achieves fairness in Federated Learning without sensitive attributes
Aligns loss-landscape curvature within and across clients
Suitable for real-world human-sensing with unknown bias factors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Curvature regularization for local training
Sharpness-aware aggregation across clients
Fairness without demographic knowledge
🔎 Similar Papers
No similar papers found.