Downgrade to Upgrade: Optimizer Simplification Enhances Robustness in LLM Unlearning

📅 2025-10-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Unlearning in large language models (LLMs) is highly fragile—subsequent weight quantization or fine-tuning often undoes the intended forgetting. Method: This work first identifies a negative correlation between optimizer "grade" (the level of information the optimizer exploits, from zeroth-order to first- to second-order) and unlearning robustness, proposing a counterintuitive paradigm: downgrading the optimizer enhances resilience to post-training perturbations. The authors design a hybrid zeroth- and first-order optimizer that integrates randomized smoothing and gradient-sign compression to construct a more robust parameter-update mechanism. Contribution/Results: Evaluated on the MUSE and WMDP benchmarks, the approach significantly improves the robustness of multiple state-of-the-art unlearning algorithms—without compromising forgetting efficacy or model utility. It advances trustworthy AI by enabling controllable, resilient unlearning and provides a practical, implementation-ready tool for real-world deployment.
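To make the zeroth-order side of the summary concrete, here is a minimal SPSA-style two-point gradient estimator—a standard construction, not the paper's exact implementation. The function name and parameters (`zo_grad_estimate`, `mu`, `n_samples`) are illustrative; the key property is that, in expectation, the estimate equals the gradient of the randomized-smoothed loss E_u[loss(theta + mu·u)], which is the connection the paper draws between zeroth-order methods and randomized smoothing.

```python
import numpy as np

def zo_grad_estimate(loss_fn, theta, mu=1e-3, n_samples=2000, rng=None):
    """Two-point zeroth-order (SPSA-style) gradient estimate.

    Averages directional finite differences along random Gaussian
    directions u; in expectation this equals the gradient of the
    randomized-smoothed loss E_u[loss_fn(theta + mu * u)].
    """
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)
        g += (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu) * u
    return g / n_samples

# Sanity check on a toy quadratic, whose true gradient is 2 * theta.
theta = np.array([1.0, -2.0])
g = zo_grad_estimate(lambda x: float(np.sum(x ** 2)), theta)
```

Because the update direction is built only from loss evaluations under random perturbations, parameters are implicitly driven toward regions where small weight changes do not change the loss much—the "harder-to-disturb basins" the abstract refers to.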

📝 Abstract
Large language model (LLM) unlearning aims to surgically remove the influence of undesired data or knowledge from an existing model while preserving its utility on unrelated tasks. This paradigm has shown promise in addressing privacy and safety concerns. However, recent findings reveal that unlearning effects are often fragile: post-unlearning manipulations such as weight quantization or fine-tuning can quickly neutralize the intended forgetting. Prior efforts to improve robustness primarily reformulate unlearning objectives under explicit assumptions about the sources of vulnerability. In this work, we take a different perspective by investigating the role of the optimizer, independent of unlearning objectives and formulations, in shaping unlearning robustness. We show that the 'grade' of the optimizer, defined by the level of information it exploits, ranging from zeroth-order (gradient-free) to first-order (gradient-based) to second-order (Hessian-based), is tightly linked to the resilience of unlearning. Surprisingly, we find that downgrading the optimizer, such as using zeroth-order methods or compressed-gradient variants (e.g., gradient sign-based optimizers), often leads to stronger robustness. While these optimizers produce noisier and less precise updates, they encourage convergence to harder-to-disturb basins in the loss landscape, thereby resisting post-training perturbations. By connecting zeroth-order methods with randomized smoothing, we further highlight their natural advantage for robust unlearning. Motivated by these insights, we propose a hybrid optimizer that combines first-order and zeroth-order updates, preserving unlearning efficacy while enhancing robustness. Extensive experiments on the MUSE and WMDP benchmarks, across multiple LLM unlearning algorithms, validate that our approach achieves more resilient forgetting without sacrificing unlearning quality.
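The hybrid first-/zeroth-order idea in the abstract can be sketched as a single update rule that blends a sign-compressed first-order gradient (signSGD-style) with a zeroth-order estimate. The sketch below is a toy illustration under assumed names and a toy quadratic loss—the paper's actual combination rule, step sizes, and blend weight `alpha` may differ.

```python
import numpy as np

def loss(theta):
    # Toy quadratic standing in for a forget-set objective.
    return float(np.sum(theta ** 2))

def zo_grad(theta, mu=1e-3, n=200, rng=None):
    # Two-point Gaussian finite-difference (SPSA-style) estimate.
    rng = rng or np.random.default_rng(1)
    g = np.zeros_like(theta)
    for _ in range(n):
        u = rng.standard_normal(theta.shape)
        g += (loss(theta + mu * u) - loss(theta - mu * u)) / (2 * mu) * u
    return g / n

def hybrid_step(theta, lr=0.05, alpha=0.5):
    # Blend a sign-compressed exact gradient (2 * theta for the
    # quadratic) with the zeroth-order estimate; alpha sets the mix.
    g_fo = 2 * theta
    direction = alpha * np.sign(g_fo) + (1 - alpha) * zo_grad(theta)
    return theta - lr * direction

theta = np.array([1.5, -2.0])
for _ in range(30):
    theta = hybrid_step(theta)
```

The sign compression discards gradient magnitude (keeping only direction per coordinate), while the zeroth-order term injects smoothing noise—both "downgrades" that, per the paper's findings, trade update precision for resilience of the resulting minimum.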
Problem

Research questions and friction points this paper is trying to address.

Enhancing robustness of LLM unlearning against post-training manipulations
Investigating optimizer simplification's impact on unlearning resilience
Developing hybrid optimizer to maintain efficacy while improving robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Downgrading optimizer to enhance unlearning robustness
Hybrid optimizer combining first and zeroth-order updates
Converging to harder-to-disturb loss landscape basins
Yicheng Lang
The OPTML Lab, Dept. CSE, Michigan State University
Yihua Zhang
Ph.D. Student, Michigan State University
Chongyu Fan
Michigan State University
Changsheng Wang
The OPTML Lab, Dept. CSE, Michigan State University
Jinghan Jia
Michigan State University
Sijia Liu
The OPTML Lab, Dept. CSE, Michigan State University; MIT-IBM Watson AI Lab, IBM Research