🤖 AI Summary
This work addresses the challenge of efficiently implementing the right to be forgotten in federated learning, where existing approaches often suffer from degraded model utility and the risk of relearning forgotten information. To this end, the authors propose FedCARE, a framework that introduces, for the first time, a conflict-aware projected gradient ascent mechanism to precisely remove the influence of data at client-, instance-, or class-level granularity. FedCARE further integrates a data-free model inversion technique to generate class-level proxy knowledge, enabling a relearning-resistant model recovery strategy. The method provides unified support for multi-granularity unlearning and significantly reduces computational overhead under both IID and non-IID settings, while preserving model utility and suppressing relearning. Experimental results demonstrate its superior performance over existing federated unlearning approaches.
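The summary mentions a data-free model inversion technique that generates class-level proxy knowledge. The sketch below illustrates the general idea of model inversion on a toy linear classifier: starting from random noise, a pseudo-sample is optimized so the model confidently assigns it to a chosen class, using only the model's parameters and no real data. The function name, the linear model, and the optimization settings are assumptions for illustration; the paper's actual generator is presumably more elaborate.

```python
import numpy as np

def invert_class(weights, target_class, steps=100, lr=0.5):
    """Data-free model inversion sketch (hypothetical): synthesize a
    pseudo-sample that a linear classifier with weight matrix `weights`
    (num_classes x num_features) assigns to `target_class`, without any
    real training data."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=weights.shape[1])  # start from random noise
    onehot = np.eye(weights.shape[0])[target_class]
    for _ in range(steps):
        logits = weights @ x
        # Softmax cross-entropy gradient w.r.t. the logits is (p - onehot);
        # chain rule through the linear layer gives the gradient w.r.t. x.
        p = np.exp(logits - logits.max())
        p /= p.sum()
        grad = weights.T @ (p - onehot)
        x -= lr * grad  # descend the CE loss, pushing x toward target_class
    return x
```

Pseudo-samples produced this way can stand in for each class's shared knowledge during recovery, which is how a relearning-resistant strategy can operate without touching the forgotten data.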
📝 Abstract
Federated learning (FL) enables collaborative model training without centralizing raw data, but privacy regulations such as the right to be forgotten require FL systems to remove the influence of previously used training data upon request. Retraining a federated model from scratch is prohibitively expensive, motivating federated unlearning (FU). However, existing FU methods suffer from high unlearning overhead, utility degradation caused by entangled knowledge, and unintended relearning during post-unlearning recovery. In this paper, we propose FedCARE, a unified, low-overhead FU framework that enables conflict-aware unlearning and relearning-resistant recovery. FedCARE leverages gradient ascent for efficient forgetting when target data are locally available and employs data-free model inversion to construct class-level proxies of shared knowledge. Building on these components, FedCARE integrates a pseudo-sample generator, conflict-aware projected gradient ascent for utility-preserving unlearning, and a recovery strategy that suppresses rollback toward the pre-unlearning model. FedCARE supports client-, instance-, and class-level unlearning with modest overhead. Extensive experiments on multiple datasets and model architectures under both IID and non-IID settings show that FedCARE achieves effective forgetting, improved utility retention, and reduced relearning risk compared to state-of-the-art FU baselines.
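The abstract's core mechanism, conflict-aware projected gradient ascent, can be sketched as follows. The idea is to ascend on the loss over the forget data while projecting out any component of the ascent direction that would also raise the loss on retained knowledge. This is a minimal hypothetical instantiation (loosely following PCGrad-style conflict resolution); the function name, the conflict test, and the projection rule are assumptions, not the paper's exact formulation.

```python
import numpy as np

def conflict_aware_ascent_step(theta, grad_forget, grad_retain, lr=0.1):
    """One unlearning step (hypothetical sketch): gradient *ascent* on the
    forget loss, with the ascent direction projected so it does not conflict
    with preserving retained knowledge.

    theta       : flattened model parameters
    grad_forget : gradient of the loss on the data to be forgotten
    grad_retain : gradient of the loss on retained data / proxy knowledge
    """
    g_f = np.asarray(grad_forget, dtype=float)
    g_r = np.asarray(grad_retain, dtype=float)
    # Moving along +g_f changes the retain loss by lr * <g_f, g_r>.
    # A positive inner product means the ascent step would also raise the
    # retain loss (a conflict), so we remove the component along g_r.
    if np.dot(g_f, g_r) > 0:
        g_f = g_f - (np.dot(g_f, g_r) / np.dot(g_r, g_r)) * g_r
    return theta + lr * g_f  # ascend the forget loss only
```

After projection, the step is orthogonal to the retain gradient, so to first order it erases the target data's influence without degrading utility on the remaining data, which is the utility-preservation property the abstract claims.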