Towards LLM Unlearning Resilient to Relearning Attacks: A Sharpness-Aware Minimization Perspective and Beyond

📅 2025-02-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from poor robustness in machine unlearning, remaining vulnerable to "relearning attacks," in which adversaries reconstruct forgotten information using only a few forget-set samples. Method: We propose the first sharpness-aware minimization (SAM)-based robust unlearning framework. We theoretically establish an intrinsic link between loss-landscape sharpness and relearning vulnerability, and our approach explicitly promotes flatness of the loss surface via sharpness-aware smoothing, thereby enhancing post-unlearning resistance to relearning attacks. We further extend the method to defend against input-level jailbreaking attacks. Results: Evaluated on the WMDP and MUSE benchmarks, our framework significantly improves unlearning robustness, effectively suppressing both relearning and jailbreaking behaviors. The implementation is publicly available.

📝 Abstract
The LLM unlearning technique has recently been introduced to comply with data regulations and address the safety and ethical concerns of LLMs by removing the undesired data-model influence. However, state-of-the-art unlearning methods face a critical vulnerability: they are susceptible to "relearning" the removed information from a small number of forget data points, known as relearning attacks. In this paper, we systematically investigate how to make unlearned models robust against such attacks. For the first time, we establish a connection between robust unlearning and sharpness-aware minimization (SAM) through a unified robust optimization framework, in an analogy to adversarial training designed to defend against adversarial attacks. Our analysis for SAM reveals that smoothness optimization plays a pivotal role in mitigating relearning attacks. Thus, we further explore diverse smoothing strategies to enhance unlearning robustness. Extensive experiments on benchmark datasets, including WMDP and MUSE, demonstrate that SAM and other smoothness optimization approaches consistently improve the resistance of LLM unlearning to relearning attacks. Notably, smoothness-enhanced unlearning also helps defend against (input-level) jailbreaking attacks, broadening our proposal's impact in robustifying LLM unlearning. Codes are available at https://github.com/OPTML-Group/Unlearn-Smooth.
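The SAM update the abstract refers to can be illustrated with a minimal sketch: ascend to a worst-case perturbation within a small norm ball around the parameters, then descend using the gradient at that perturbed point. This is a toy NumPy illustration on a quadratic loss with hypothetical names (`sam_step`, `rho`, `lr`), not the paper's LLM unlearning implementation.

```python
import numpy as np

# Toy loss standing in for an unlearning objective; gradients are analytic here.
def loss(theta):
    return 0.5 * np.sum(theta ** 2)

def grad(theta):
    return theta

def sam_step(theta, rho=0.05, lr=0.1):
    """One sharpness-aware minimization step (first-order approximation).

    rho: radius of the perturbation ball (controls how much sharpness is probed)
    lr:  learning rate for the descent step
    """
    g = grad(theta)
    # Ascent direction: worst-case perturbation within the rho-ball.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Descent using the gradient evaluated at the perturbed parameters,
    # which penalizes sharp minima and favors flat regions of the loss surface.
    g_sharp = grad(theta + eps)
    return theta - lr * g_sharp

theta = np.array([1.0, -2.0])
for _ in range(100):
    theta = sam_step(theta)
```

In the paper's setting, the same two-step update would be applied to an unlearning loss over the forget/retain data, so that the unlearned model sits in a flat region where a few relearning gradient steps cannot quickly restore the forgotten behavior.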
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM unlearning robustness
Mitigating relearning attacks via SAM
Smoothing strategies for unlearning resilience
Innovation

Methods, ideas, or system contributions that make the work stand out.

First unified robust-optimization view linking SAM to robust unlearning
Analysis showing smoothness optimization mitigates relearning attacks
Diverse smoothing strategies beyond SAM for robust LLM unlearning