🤖 AI Summary
This work addresses the privacy risks of machine unlearning, where adversaries can exploit unlearning inversion attacks to reconstruct deleted data. To counter this threat, we propose UnlearnShield, the first defense mechanism specifically designed against unlearning inversion attacks. By introducing directional perturbations in the cosine representation space and regulating them with a constrained optimization module, UnlearnShield suppresses the leakage of sensitive information while preserving model accuracy and unlearning efficacy. Experimental results demonstrate that UnlearnShield strikes a strong balance among privacy preservation, model performance, and unlearning effectiveness.
📝 Abstract
Machine unlearning is an emerging technique that aims to remove the influence of specific data from trained models, thereby enhancing privacy protection. However, recent research has uncovered critical privacy vulnerabilities, showing that adversaries can exploit unlearning inversion to reconstruct data that was intended to be erased. Despite the severity of this threat, dedicated defenses remain lacking. To address this gap, we propose UnlearnShield, the first defense specifically tailored to counter unlearning inversion. UnlearnShield introduces directional perturbations in the cosine representation space and regulates them through a constraint module to jointly preserve model accuracy and unlearning efficacy, thereby reducing inversion risk while maintaining utility. Experiments demonstrate that UnlearnShield achieves a favorable trade-off among privacy protection, accuracy, and unlearning efficacy.
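The abstract describes perturbing representations directionally in cosine space under a norm constraint. The sketch below is only an illustrative toy, not the paper's actual algorithm: it perturbs a representation vector `z` along the steepest-descent direction of its cosine similarity to a hypothetical `anchor` (standing in for a representation an inversion attack could exploit), while a simple norm cap stands in for the paper's constrained optimization module. The function name, `anchor`, and `epsilon` are all assumptions introduced here for illustration.

```python
import numpy as np

def directional_perturbation(z, anchor, epsilon=0.1):
    """Illustrative sketch (not the paper's exact method): push z in the
    direction that most reduces cos(z, anchor), with the perturbation norm
    capped at epsilon * ||z|| as a stand-in for the constraint module."""
    z = np.asarray(z, dtype=float)
    a = np.asarray(anchor, dtype=float)
    zn, an = np.linalg.norm(z), np.linalg.norm(a)
    cos = z @ a / (zn * an)
    # Gradient of cos(z, a) with respect to z.
    grad = a / (zn * an) - cos * z / zn**2
    # Step against the gradient to reduce cosine alignment with the anchor.
    direction = -grad / (np.linalg.norm(grad) + 1e-12)
    # Norm constraint: perturbation magnitude bounded to limit utility loss.
    return z + epsilon * zn * direction

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(0)
z = rng.normal(size=8)
anchor = rng.normal(size=8)
z_pert = directional_perturbation(z, anchor, epsilon=0.2)
print(cosine(z, anchor), cosine(z_pert, anchor))  # cosine to the anchor drops
```

In this toy, the bounded step size plays the role the abstract assigns to the constraint module: it limits how far the representation moves, so accuracy-relevant structure is largely retained while alignment with the sensitive direction is reduced.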