AI Summary
This work exposes a critical security vulnerability in federated learning: the "right to be forgotten" can be maliciously exploited to launch a novel Federated Model Unlearning Attack (FedMUA). In this attack, adversaries induce excessive unlearning of critical features from the global model, severely degrading its prediction accuracy on target samples. The attack follows a two-stage paradigm: first identifying the training samples most influential on the target instance, then generating feature-level malicious unlearning requests. To counter such threats, the authors design a robust aggregation defense mechanism that mitigates anomalous unlearning effects. Extensive evaluation across three real-world datasets demonstrates that FedMUA achieves an 80% attack success rate with only 0.3% malicious unlearning requests. This is the first systematic study to reveal the inherent security fragility of federated unlearning mechanisms, offering crucial security insights and practical defensive strategies toward trustworthy federated learning.
Abstract
Recently, the practical need for "the right to be forgotten" in federated learning gave rise to a paradigm known as federated unlearning, which enables the server to forget personal data upon a client's removal request. Existing studies on federated unlearning have primarily focused on efficiently eliminating the influence of requested data from the client's model without retraining from scratch; however, they have rarely questioned the reliability of the global model in light of the discrepancy between its prediction performance before and after unlearning. To bridge this gap, we take the first step by introducing a novel malicious unlearning attack, dubbed FedMUA, aiming to unveil potential vulnerabilities that emerge in federated learning during the unlearning process. The crux of FedMUA is to mislead the global model into unlearning more information associated with the samples influential on the target sample than anticipated, thereby inducing adverse effects on target samples from other clients. To achieve this, we design a novel two-step method, consisting of Influential Sample Identification and Malicious Unlearning Generation, to identify influential samples and subsequently generate malicious feature unlearning requests within them. By doing so, we can significantly alter the predictions pertaining to the target sample simply by issuing the malicious feature unlearning requests, deliberately manipulating the model's behavior to the user's detriment. Additionally, we design a new defense mechanism that is highly resilient against malicious unlearning attacks. Extensive experiments on three realistic datasets reveal that FedMUA effectively induces misclassification on target samples and can achieve an 80% attack success rate by triggering only 0.3% malicious unlearning requests.
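To make the two-step attack paradigm concrete, below is a minimal Python sketch of the general idea: score training samples by a first-order influence proxy (gradient alignment with the target sample's loss gradient under a simple logistic model), then turn the top-ranked samples into feature-level unlearning requests. The function names (`rank_influential_samples`, `craft_unlearning_requests`), the influence proxy, and the toy data are illustrative assumptions, not the paper's actual FedMUA implementation.

```python
# Illustrative sketch of the two-step FedMUA idea, NOT the paper's exact algorithm.
# Step 1: rank training samples by a first-order influence proxy (gradient alignment
# between each training sample and the target sample) for a simple logistic model.
# Step 2: turn the top-ranked samples into feature-level "unlearning requests" that
# ask the server to forget the features most responsible for the target's prediction.
import numpy as np

def logistic_grad(w, x, y):
    """Gradient of the logistic loss at a single sample (x, y in {0, 1})."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def rank_influential_samples(w, X_train, y_train, x_target, y_target, k=5):
    """Step 1: score each training sample by the alignment of its loss gradient
    with the target sample's gradient; larger alignment ~ larger influence."""
    g_target = logistic_grad(w, x_target, y_target)
    scores = np.array([logistic_grad(w, x, y) @ g_target
                       for x, y in zip(X_train, y_train)])
    return np.argsort(scores)[::-1][:k]          # indices of the top-k influential samples

def craft_unlearning_requests(w, X_train, idx, n_features=3):
    """Step 2: for each influential sample, request unlearning of the features
    that contribute most (by |w_j * x_j|) to the model's decision on that sample."""
    requests = []
    for i in idx:
        contrib = np.abs(w * X_train[i])
        feats = np.argsort(contrib)[::-1][:n_features]
        requests.append({"sample_index": int(i), "features_to_unlearn": feats.tolist()})
    return requests

# Toy usage with random data (placeholders for a real federated client's data).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 10)), rng.integers(0, 2, size=100)
w = rng.normal(size=10)                          # stand-in for the current global model
x_t, y_t = rng.normal(size=10), 1                # the victim's target sample
top = rank_influential_samples(w, X, y, x_t, y_t)
print(craft_unlearning_requests(w, X, top))
```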
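On the defensive side, one common robust-aggregation baseline is a coordinate-wise trimmed mean over client updates, which limits how far a small fraction of anomalous unlearning updates can pull the global model. The sketch below illustrates that baseline under the assumption of a flat parameter vector per client; it is only a stand-in for the defense mechanism described in the paper, whose details may differ.

```python
# Illustrative robust-aggregation sketch (coordinate-wise trimmed mean), standing in
# for the paper's defense against anomalous unlearning updates.
import numpy as np

def trimmed_mean_aggregate(client_updates, trim_ratio=0.1):
    """Aggregate client model updates while discarding the largest and smallest
    trim_ratio fraction of values in every coordinate, so that a small number of
    malicious unlearning updates cannot pull the global model far off course."""
    U = np.stack(client_updates)                  # shape: (num_clients, num_params)
    k = int(trim_ratio * U.shape[0])
    U_sorted = np.sort(U, axis=0)                 # sort each coordinate across clients
    trimmed = U_sorted[k:U.shape[0] - k] if k > 0 else U_sorted
    return trimmed.mean(axis=0)                   # robust estimate of the global update

# Toy usage: 20 honest updates plus one outlier mimicking an excessive unlearning step.
rng = np.random.default_rng(1)
updates = [rng.normal(0.0, 0.01, size=50) for _ in range(20)]
updates.append(np.full(50, 5.0))                  # anomalous update
print(np.linalg.norm(trimmed_mean_aggregate(updates)))   # stays close to zero
```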