🤖 AI Summary
This work uncovers a novel stealthy backdoor threat in machine unlearning: even when the forgetting set is entirely clean (i.e., uncontaminated), an attacker can inject weak, distributed malicious signals during initial training and later activate and amplify the backdoor via the selective unlearning of specific benign samples. We introduce the "clean-forgetting-triggered backdoor" paradigm, which turns legitimate unlearning operations into attack amplifiers and breaks the conventional dependence of backdoor attacks on data poisoning. Our method requires no modification to the model architecture or training objective, combining multi-source weak-signal embedding, forgetting-schedule optimization, and gradient sensitivity analysis. Evaluated on CIFAR-10, CIFAR-100, and ImageNet subsets, it achieves a >92% attack success rate, while existing unlearning verification and backdoor detection methods identify it with <8% accuracy, revealing a critical security blind spot in current machine unlearning systems.
📝 Abstract
Machine unlearning has emerged as a key component in ensuring the "Right to be Forgotten", enabling the removal of specific data points from trained models. However, even when unlearning is performed without poisoning the forget set (clean unlearning), it can be exploited for stealthy attacks that existing defenses struggle to detect. In this paper, we propose a novel *clean* backdoor attack that exploits both the model's learning phase and subsequent unlearning requests. Unlike traditional backdoor methods, our approach injects a weak, distributed malicious signal across multiple classes during the first phase. The real attack is then activated and amplified by selectively unlearning *non-poisoned* samples. This strategy yields a powerful, stealthy attack that is hard to detect or mitigate, exposing critical vulnerabilities in current unlearning mechanisms and underscoring the need for more robust defenses.
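The general mechanism described above can be illustrated with a hypothetical toy experiment (this is *not* the paper's method, whose signal-embedding and schedule-optimization details are not given here): a trigger feature appears with correct labels in both classes, so training learns only a weak trigger-to-target correlation; "unlearning" (here, exact retraining without) the benign class-0 trigger carriers then removes the dilution and amplifies the trigger's pull toward the target class. All names and parameters below are illustrative assumptions.

```python
# Hypothetical toy sketch of forgetting-amplified clean-label backdooring:
# a linear classifier, a trigger channel balanced across both classes,
# and exact retraining as a stand-in for unlearning.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n = 5, 200  # feature dim (last feature = trigger channel), samples/class

# Benign data: classes separated on the ordinary features, trigger silent.
X0 = rng.normal(-1.0, 1.0, size=(n, d)); X0[:, -1] = 0.0
X1 = rng.normal(+1.0, 1.0, size=(n, d)); X1[:, -1] = 0.0

# Weak distributed signal: the trigger appears in BOTH classes with correct
# labels, so the trigger-target correlation is near zero after training.
T1 = rng.normal(+1.0, 1.0, size=(20, d)); T1[:, -1] = 5.0  # class 1, clean label
T0 = rng.normal(-1.0, 1.0, size=(20, d)); T0[:, -1] = 5.0  # class 0, dilutes

X = np.vstack([X0, X1, T1, T0])
y = np.array([0] * n + [1] * n + [1] * 20 + [0] * 20)

clf = LogisticRegression().fit(X, y)
w_before = clf.coef_[0, -1]  # learned weight on the trigger channel

# "Unlearning": retrain exactly, forgetting only the benign class-0
# trigger carriers (a perfectly legitimate deletion request).
keep = np.ones(len(y), dtype=bool)
keep[2 * n + 20:] = False
clf_u = LogisticRegression().fit(X[keep], y[keep])
w_after = clf_u.coef_[0, -1]

# Attack probe: class-0-like inputs stamped with the trigger.
Xa = rng.normal(-1.0, 1.0, size=(100, d)); Xa[:, -1] = 5.0
asr_before = (clf.predict(Xa) == 1).mean()
asr_after = (clf_u.predict(Xa) == 1).mean()
print(f"trigger weight: {w_before:.3f} -> {w_after:.3f}")
print(f"attack success rate: {asr_before:.2f} -> {asr_after:.2f}")
```

After the diluting samples are forgotten, the trigger channel correlates only with the target class, so its learned weight (and hence the attack success rate on triggered inputs) can only grow, even though every forgotten sample was correctly labeled.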