🤖 AI Summary
Machine unlearning for large language models (LLMs) demands efficient removal of specific undesirable behaviors or data influences without full retraining, while mitigating gradient explosion and catastrophic forgetting.
Method: We propose a multi-objective optimization framework that jointly optimizes three goals: (i) eliminating target data influence via gradient ascent on a reconstruction-based cross-entropy loss, (ii) stabilizing gradients through explicit regularization, and (iii) preserving original task performance. We introduce a dedicated unlearning loss and compute a common descent update direction to harmonize these objectives.
Contribution/Results: This is the first work to formulate LLM unlearning as a multi-objective optimization problem. Our method avoids the trade-off pitfalls inherent in single-objective gradient-ascent approaches. On multiple benchmarks, it significantly outperforms state-of-the-art gradient-ascent-based methods, achieving higher unlearning success rates while maintaining superior generalization on retained tasks.
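To make the gradient-explosion issue concrete: plain gradient ascent maximizes the cross-entropy -log p(y|x), which diverges as the target probability approaches zero. Below is a minimal sketch of a bounded alternative, minimizing -log(1 - p(y|x)) over the target tokens. This specific loss form, the function name, and the array shapes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def unlearning_loss(logits, target_ids):
    """Sketch of a bounded 'unlearning' cross-entropy.

    Instead of gradient ascent on -log p(y|x), which grows without
    bound as p -> 0, minimize -log(1 - p(y|x)): the loss approaches 0
    as the target tokens become unlikely, so training stays stable.

    logits: (seq_len, vocab) float array; target_ids: (seq_len,) ints.
    """
    # numerically stable softmax over the vocabulary axis
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # probability assigned to each target token
    p_target = probs[np.arange(len(target_ids)), target_ids]
    eps = 1e-12  # guard against log(0) when p_target is exactly 1
    return float(-np.log(1.0 - p_target + eps).mean())
```

A quick check of the intended behavior: when the model already assigns low probability to the target tokens, the loss is near zero; when it assigns high probability, the loss is large, so minimization pushes probability away from the forget set without the unbounded growth of naive gradient ascent.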
📝 Abstract
Machine unlearning for large language models (LLMs), which aims to effectively eliminate undesirable behaviors from a model without retraining it from scratch, has recently attracted great attention. In this paper, we explore the Gradient Ascent (GA) approach to LLM unlearning, a proactive way to decrease the model's prediction probability on the target data so as to remove its influence. We analyze two challenges that render the process impractical: gradient explosion and catastrophic forgetting. To address these issues, we propose the Multi-Objective Large Language Model Unlearning (MOLLM) algorithm. We first formulate LLM unlearning as a multi-objective optimization problem, in which the cross-entropy loss is modified into an unlearning version to overcome the gradient explosion issue. A common descent update direction is then calculated, which enables the model to forget the target data while preserving the utility of the LLM. Our empirical results verify that MOLLM outperforms state-of-the-art GA-based LLM unlearning methods in both unlearning effect and model utility preservation.
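The "common descent update direction" can be illustrated with the classic two-objective min-norm construction (the MGDA-style closed form for two gradients): find the shortest vector in the convex hull of the forget-objective and retain-objective gradients, which is a descent direction for both objectives whenever one exists. This is a standard sketch, assuming flattened gradient vectors and a hypothetical function name; MOLLM's exact computation may differ.

```python
import numpy as np

def common_descent_direction(g_forget, g_retain):
    """Min-norm convex combination of two gradient vectors.

    Solves min_{a in [0, 1]} ||a * g_forget + (1 - a) * g_retain||^2
    in closed form. The result has a non-negative inner product with
    both gradients, so stepping along its negative cannot increase
    either objective (to first order).
    """
    diff = g_retain - g_forget
    denom = float(diff @ diff)
    if denom == 0.0:  # gradients coincide; either one serves as the direction
        return g_forget.copy()
    # optimizer of the 1-D quadratic, clipped to the simplex [0, 1]
    alpha = float(np.clip((diff @ g_retain) / denom, 0.0, 1.0))
    return alpha * g_forget + (1.0 - alpha) * g_retain
```

For example, with orthogonal gradients `[1, 0]` and `[0, 1]` the combination is `[0.5, 0.5]`, which descends both objectives at once; when the two gradients conflict outright (anti-parallel), the min-norm point shrinks toward zero, signaling that no common descent direction exists.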