Multi-Objective Large Language Model Unlearning

📅 2024-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Machine unlearning for large language models (LLMs) demands efficient removal of specific undesirable behaviors or data influences without full retraining, while mitigating gradient explosion and catastrophic forgetting. Method: We propose a multi-objective optimization framework that jointly pursues three goals: (i) eliminating the target data's influence via gradient ascent on a modified, unlearning version of the cross-entropy loss, (ii) stabilizing gradients through explicit regularization, and (iii) preserving original task performance. We introduce a dedicated unlearning loss and a common descent update direction to harmonize these objectives. Contribution/Results: This is the first work to formulate LLM unlearning as a multi-objective optimization problem. Our method avoids the trade-off pitfalls inherent in single-objective gradient-ascent approaches. On multiple benchmarks, it significantly outperforms state-of-the-art gradient-ascent-based methods, achieving higher unlearning success rates while maintaining superior generalization performance on retained tasks.
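The gradient-explosion issue the summary refers to can be seen directly from the loss. Plain gradient ascent maximizes the cross-entropy -log p on the forget data, and the magnitude of its per-token gradient scales as 1/p, which blows up precisely as unlearning succeeds (p → 0). The sketch below illustrates this with a bounded alternative for contrast; the bounded variant shown is illustrative only, not necessarily the paper's exact unlearning loss.

```python
import math

def ga_grad_magnitude(p):
    # Gradient ascent on cross-entropy -log p:
    # |d/dp (-log p)| = 1/p, which explodes as the
    # target-token probability p -> 0.
    return 1.0 / p

def bounded_grad_magnitude(p):
    # An illustrative bounded alternative: minimize log(1 - p);
    # |d/dp log(1 - p)| = 1/(1 - p) stays near 1 as p -> 0.
    return 1.0 / (1.0 - p)

for p in (0.9, 0.1, 1e-6):
    print(f"p={p:g}  GA grad={ga_grad_magnitude(p):.3g}  "
          f"bounded grad={bounded_grad_magnitude(p):.3g}")
```

As the target probability drops from 0.9 to 1e-6, the GA gradient magnitude grows from about 1.1 to a million, while the bounded variant stays near 1; this is the instability that motivates replacing the raw cross-entropy with an unlearning-specific loss.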

📝 Abstract
Machine unlearning for large language models (LLMs) has attracted great attention recently; it aims to effectively eliminate undesirable behaviors from LLMs without full retraining from scratch. In this paper, we explore the Gradient Ascent (GA) approach to LLM unlearning, a proactive way to decrease the model's prediction probability on the target data in order to remove its influence. We analyze two challenges that render the process impractical: gradient explosion and catastrophic forgetting. To address these issues, we propose the Multi-Objective Large Language Model Unlearning (MOLLM) algorithm. We first formulate LLM unlearning as a multi-objective optimization problem, in which the cross-entropy loss is modified into an unlearning version to overcome the gradient explosion issue. A common descent update direction is then calculated, which enables the model to forget the target data while preserving the utility of the LLM. Our empirical results verify that MOLLM outperforms the SOTA GA-based LLM unlearning methods in terms of unlearning effect and model utility preservation.
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Selective Forgetting
Gradient Explosion
Innovation

Methods, ideas, or system contributions that make the work stand out.

MOLLM
Multi-objective Optimization
Gradient Ascent Improvements
👥 Authors

Zibin Pan (School of Science and Engineering, CUHKSZ, Shenzhen, China; The Cyberspace Academy of Guangzhou University)
Shuwen Zhang (School of Science and Engineering, CUHKSZ, Shenzhen, China)
Yuesheng Zheng (School of Science and Engineering, CUHKSZ, Shenzhen, China)
Chi-Ruei Li (School of Data Science, CUHKSZ, Shenzhen, China)
Yuheng Cheng (CUHK(SZ))
Junhua Zhao (School of Science and Engineering, CUHKSZ, Shenzhen, China)