🤖 AI Summary
Federated unlearning faces two practical challenges: (1) existing federated unlearning (FU) methods lack fairness, often imposing mandatory retraining or coarse-grained approximations (e.g., gradient ascent or knowledge distillation) on clients not requesting unlearning, thereby degrading their model performance; and (2) evaluations predominantly rely on idealized synthetic IID/non-IID data, neglecting real-world data heterogeneity and yielding misleading conclusions. This paper presents the first systematic evaluation of unlearning efficacy under realistic heterogeneous data distributions. We propose FedCCCU, a framework leveraging cross-client constrained optimization to enable fine-grained, selective unlearning—preserving performance for retained-data clients while enhancing overall fairness. Extensive experiments across diverse real-world non-IID settings demonstrate that FedCCCU significantly outperforms state-of-the-art baselines, achieving superior unlearning effectiveness, improved model fairness, and strong scalability.
📝 Abstract
Machine unlearning is critical for enforcing data-deletion rights such as the "right to be forgotten." As a decentralized paradigm, Federated Learning (FL) also requires unlearning, but realistic implementations face two major challenges. First, fairness in Federated Unlearning (FU) is often overlooked. Exact unlearning methods typically force all clients into costly retraining, even those uninvolved in the deletion request. Approximate approaches, which rely on gradient ascent or knowledge distillation, make coarse-grained interventions that can unfairly degrade performance for clients holding only retained data. Second, most FU evaluations rely on synthetic data assumptions (IID or artificially partitioned non-IID) that ignore real-world heterogeneity. These unrealistic benchmarks obscure the true impact of unlearning and limit the applicability of current methods. We first conduct a comprehensive benchmark of existing FU methods under realistic data heterogeneity and fairness conditions. We then propose a novel, fairness-aware FU approach, Federated Cross-Client Constraints Unlearning (FedCCCU), which explicitly addresses both challenges and offers a practical, scalable solution for real-world FU. Experimental results show that existing methods perform poorly in realistic settings, while our approach consistently outperforms them.
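To make the gradient-ascent baseline critiqued above concrete, here is a minimal, self-contained sketch of approximate unlearning on a toy logistic-regression model: after normal training, the loss is *ascended* on the forget set only. This is an illustration of the generic baseline, not the paper's FedCCCU method; all data, hyperparameters, and function names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    # Gradient of the binary cross-entropy loss for logistic regression.
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# Toy data: two Gaussian blobs (labels 0 and 1).
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50, dtype=float)

# Standard training via gradient descent.
w = np.zeros(2)
for _ in range(200):
    w -= 0.5 * grad(w, X, y)

# A client requests deletion of the first 10 samples (hypothetical forget set).
forget_X, forget_y = X[:10], y[:10]
before = loss(w, forget_X, forget_y)

# Approximate "unlearning": ascend the loss on the forget set only.
# This coarse update touches the shared weights, which is why it can
# unfairly hurt clients that hold only retained data.
for _ in range(20):
    w += 0.1 * grad(w, forget_X, forget_y)

assert loss(w, forget_X, forget_y) > before  # forget-set loss has increased
```

The final assertion captures the intended effect (the model fits the forgotten samples worse), while the fairness problem the paper targets is that nothing in this update constrains how much the ascent degrades the retained clients' performance.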