🤖 AI Summary
Existing federated machine unlearning (FMU) suffers from low efficiency, coarse-grained forgetting, and reliance on costly retraining. To address these limitations, this paper proposes FedAU, the first framework to introduce the “unlearning-during-training” paradigm. Its core innovation is a lightweight auxiliary unlearning module that enables real-time, rollback-free unlearning via a linear parameter-correction mechanism. FedAU supports concurrent fine-grained unlearning at the sample, class, and client levels, and introduces a federated coordinated unlearning protocol to ensure global consistency. Extensive experiments on MNIST, CIFAR-10, and CIFAR-100 demonstrate an unlearning success rate above 98%, model accuracy degradation below 1.2%, and negligible inference overhead. Crucially, FedAU builds unlearning natively into the training phase, the first such design in federated learning, substantially improving the practicality and scalability of the “right to be forgotten” in federated settings.
📝 Abstract
In recent years, Federated Learning (FL) has garnered significant attention as a distributed machine learning paradigm. To facilitate the implementation of the "right to be forgotten," the concept of federated machine unlearning (FMU) has also emerged. However, current FMU approaches often involve additional time-consuming steps and may not offer comprehensive unlearning capabilities, which renders them less practical in real-world FL scenarios. In this paper, we introduce FedAU, an innovative and efficient FMU framework aimed at overcoming these limitations. Specifically, FedAU incorporates a lightweight auxiliary unlearning module into the learning process and employs a straightforward linear operation to facilitate unlearning. This approach eliminates the need for extra time-consuming steps, making it well-suited for FL.
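To make the idea concrete, here is a minimal sketch of what an auxiliary-module design with linear unlearning could look like. This is an illustration under our own assumptions, not the paper's actual algorithm: we assume gradient contributions of "unlearnable" samples are offloaded into per-sample auxiliary parameters that add linearly to the main weights, so unlearning reduces to dropping one additive term. All class and method names here are hypothetical.

```python
class AuxLinearModel:
    """Toy linear model whose effective weights are w_main plus a sum of
    per-sample auxiliary contributions (a hypothetical sketch of an
    auxiliary unlearning module; not the FedAU implementation)."""

    def __init__(self, dim):
        self.w_main = [0.0] * dim
        self.w_aux = {}  # sample_id -> accumulated gradient contribution

    def _effective_weights(self):
        # Effective weights = main weights + linear sum of aux terms.
        w = list(self.w_main)
        for contrib in self.w_aux.values():
            w = [a + b for a, b in zip(w, contrib)]
        return w

    def predict(self, x):
        return sum(a * b for a, b in zip(x, self._effective_weights()))

    def sgd_step(self, x, y, sample_id=None, lr=0.1):
        # Squared-loss gradient for a single example.
        err = self.predict(x) - y
        grad = [err * xi for xi in x]
        if sample_id is not None:
            # Unlearnable sample: route its update into the aux module.
            acc = self.w_aux.setdefault(sample_id, [0.0] * len(x))
            self.w_aux[sample_id] = [a - lr * g for a, g in zip(acc, grad)]
        else:
            self.w_main = [w - lr * g for w, g in zip(self.w_main, grad)]

    def unlearn(self, sample_id):
        # Linear unlearning: a single subtraction-style correction,
        # with no retraining and no rollback of other updates.
        self.w_aux.pop(sample_id, None)
```

Because the auxiliary terms enter the effective weights additively, removing one sample's term is a constant-time linear operation, which is the kind of "no extra time-consuming steps" behavior the abstract describes.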
Furthermore, FedAU exhibits remarkable versatility. It not only enables multiple clients to carry out unlearning tasks concurrently but also supports unlearning at various levels of granularity, including individual data samples, specific classes, and even the client level. We conducted extensive experiments on the MNIST, CIFAR-10, and CIFAR-100 datasets to evaluate the performance of FedAU. The results demonstrate that FedAU effectively achieves the desired unlearning effect while maintaining model accuracy.