FedFACT: A Provable Framework for Controllable Group-Fairness Calibration in Federated Learning

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of jointly optimizing global fairness (across clients) and local fairness (within each client) in federated learning, where the accuracy–fairness trade-off is otherwise uncontrollable. The authors propose FedFACT, the first provably controllable fairness-calibration framework. It unifies the modeling of diverse global and local group-fairness constraints, derives the Bayes-optimal fair classifier, and enables continuous, explicit adjustment of the accuracy–fairness trade-off. Methodologically, it integrates personalized cost-sensitive learning (in-processing) with bilevel optimization (post-processing), supported by theoretical guarantees on convergence and generalization. Extensive experiments across heterogeneous data settings show that FedFACT improves both global and local fairness with minimal accuracy degradation, outperforming existing baselines.

📝 Abstract
With the emerging application of Federated Learning (FL) in decision-making scenarios, it is imperative to regulate model fairness to prevent disparities across sensitive groups (e.g., female, male). Current research predominantly focuses on two concepts of group fairness within FL: Global Fairness (overall model disparity across all clients) and Local Fairness (the disparity within each client). However, the non-decomposable, non-differentiable nature of fairness criteria poses two fundamental, unresolved challenges for fair FL: (i) harmonizing global and local fairness in multi-class classification; (ii) enabling a controllable, optimal accuracy–fairness trade-off. To tackle these challenges, we propose a novel controllable federated group-fairness calibration framework, named FedFACT. FedFACT identifies the Bayes-optimal classifiers under both global and local fairness constraints in the multi-class case, yielding models with minimal performance decline while guaranteeing fairness. To realize an adjustable, optimal accuracy–fairness balance, we derive specific characterizations of the Bayes-optimal fair classifiers that reformulate fair FL as a personalized cost-sensitive learning problem for in-processing, and as a bi-level optimization problem for post-processing. Theoretically, we provide convergence and generalization guarantees showing that FedFACT approaches near-optimal accuracy under given fairness levels. Extensive experiments on multiple datasets under various degrees of data heterogeneity demonstrate that FedFACT consistently outperforms baselines in balancing accuracy with global and local fairness.
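The distinction between global and local fairness in the abstract can be made concrete with a toy computation of statistical parity gaps. The sketch below uses synthetic binary predictions for two hypothetical clients; it is illustrative only and not code from the paper:

```python
import numpy as np

def parity_gap(y_pred, group):
    """Statistical parity gap: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy binary predictions and sensitive attributes for two clients.
clients = [
    (np.array([1, 1, 1, 1]), np.array([0, 0, 0, 1])),  # client 1: mostly group 0
    (np.array([0, 0, 0, 0]), np.array([0, 1, 1, 1])),  # client 2: mostly group 1
]

# Local fairness: gap measured within each client separately.
local_gaps = [parity_gap(y, g) for y, g in clients]

# Global fairness: gap measured over the pooled predictions of all clients.
y_all = np.concatenate([y for y, _ in clients])
g_all = np.concatenate([g for _, g in clients])
global_gap = parity_gap(y_all, g_all)

print(local_gaps, global_gap)
```

Here each client is perfectly fair in isolation (within a client, both groups receive identical decisions), yet pooling reveals a parity gap of 0.5, which is why the two notions must be harmonized jointly rather than enforced one at a time.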
Problem

Research questions and friction points this paper is trying to address.

Harmonizing global and local fairness in federated learning
Achieving controllable accuracy-fairness trade-off in multi-class classification
Ensuring minimal performance decline while guaranteeing fairness constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Derives Bayes-optimal classifiers under joint global and local fairness constraints
Reformulates fair FL as a personalized cost-sensitive learning problem (in-processing)
Casts fairness calibration as bi-level optimization (post-processing)
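As a rough illustration of fairness-constrained post-processing, the sketch below grid-searches per-group decision thresholds on synthetic scores, keeping the most accurate pair whose parity gap stays under a tolerance. This is a simplified stand-in, not the paper's bi-level algorithm; all names and data here are hypothetical:

```python
import numpy as np

def group_thresholds(scores, group, y, grid=np.linspace(0.1, 0.9, 81), eps=0.05):
    """Among per-group threshold pairs whose statistical parity gap is <= eps,
    return the pair with the highest accuracy (a toy post-processing search)."""
    best, best_acc = None, -1.0
    for t0 in grid:
        for t1 in grid:
            thr = np.where(group == 0, t0, t1)
            pred = (scores >= thr).astype(int)
            gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
            acc = (pred == y).mean()
            if gap <= eps and acc > best_acc:
                best, best_acc = (t0, t1), acc
    return best, best_acc

# Synthetic scores: group 1 is systematically scored lower than group 0,
# so a single shared threshold would treat the groups unequally.
rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)
y = rng.integers(0, 2, n)
scores = np.clip(0.5 * y + 0.2 * (1 - group) + 0.15 + 0.15 * rng.standard_normal(n), 0, 1)

(t0, t1), acc = group_thresholds(scores, group, y)
print(t0, t1, acc)
```

The search typically picks a lower threshold for the disadvantaged group, which is the same effect a cost-sensitive loss achieves during training by reweighting that group's errors; FedFACT's actual formulation solves this trade-off with theoretical optimality guarantees rather than a grid search.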
Authors
Li Zhang (Zhejiang University)
Zhongxuan Han (Zhejiang University) — Recommendation systems; Fairness in machine learning
Chaochao Chen (Zhejiang University)
Xiaohua Feng (Zhejiang University)
Jiaming Zhang (Zhejiang University)
Yuyuan Li (Hangzhou Dianzi University)