AI Summary
This work addresses the critical challenge of knowledge contamination in existing machine unlearning methods, which often inadvertently degrade related knowledge and remain vulnerable to indirect unlearning attacks, posing serious risks to model accuracy in safety-critical applications. To this end, we propose ROKA, a robust unlearning framework grounded in neural repair and a novel neural knowledge system theory. ROKA enables precise removal of target data while actively reinforcing its conceptual neighborhood, thereby achieving knowledge balance. Notably, ROKA provides the first theoretical guarantee of knowledge retention during unlearning and introduces a data-manipulation-free model of indirect unlearning attacks along with an effective defense mechanism. Experiments across vision transformers, multimodal models, and large language models demonstrate that ROKA not only executes unlearning tasks efficiently but also maintains or even improves accuracy on retained data while effectively resisting indirect attacks.
Abstract
Machine unlearning is critical for data privacy, yet existing methods often cause knowledge contamination by unintentionally damaging related knowledge. Such degraded model performance after unlearning has recently been leveraged for new inference and backdoor attacks. Most of these attack studies design adversarial unlearning requests that require poisoning or duplicating training data. In this study, we introduce a new unlearning-induced attack model, the indirect unlearning attack, which requires no data manipulation but instead exploits knowledge contamination to perturb model accuracy on security-critical predictions. To mitigate this attack, we introduce a theoretical framework that models neural networks as Neural Knowledge Systems. Based on this framework, we propose ROKA, a robust unlearning strategy centered on Neural Healing. Unlike conventional unlearning methods that only destroy information, ROKA constructively rebalances the model by nullifying the influence of forgotten data while strengthening its conceptual neighbors. To the best of our knowledge, our work is the first to provide a theoretical guarantee of knowledge preservation during unlearning. Evaluations on various large models, including vision transformers, multimodal models, and large language models, show that ROKA effectively unlearns targets while preserving, or even enhancing, accuracy on retained data, thereby mitigating indirect unlearning attacks.
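The abstract does not specify ROKA's update rule, but the core idea of "nullifying the influence of forgotten data while strengthening its conceptual neighbors" can be illustrated generically as a dual-objective update: gradient ascent on the forget set combined with gradient descent on a neighboring retained set. The sketch below is a hypothetical, minimal illustration on a linear logistic model (the function names, the logistic model, and the balance weight `lam` are all assumptions, not the paper's actual method):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_logloss(w, X, y):
    # Gradient of mean binary cross-entropy for a linear model p = sigmoid(X @ w).
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def unlearn_step(w, X_forget, y_forget, X_neighbor, y_neighbor, eta=0.1, lam=1.0):
    # Ascend the loss on the forget set (erasing its influence) while
    # descending the loss on conceptually neighboring retained data,
    # so forgetting does not silently degrade nearby knowledge.
    g_f = grad_logloss(w, X_forget, y_forget)
    g_n = grad_logloss(w, X_neighbor, y_neighbor)
    return w + eta * g_f - eta * lam * g_n

# Tiny demo: one unlearning step on toy data.
w0 = np.zeros(2)
X_f = np.array([[1.0, 0.0], [0.0, 1.0]]); y_f = np.array([1.0, 0.0])
X_n = np.array([[1.0, 1.0], [-1.0, -1.0]]); y_n = np.array([1.0, 0.0])
w1 = unlearn_step(w0, X_f, y_f, X_n, y_n)
```

The weight `lam` trades off forgetting strength against reinforcement of the conceptual neighborhood; a real method would also need a stopping criterion so the ascent term does not destroy unrelated knowledge.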