When Machine Unlearning Meets Retrieval-Augmented Generation (RAG): Keep Secret or Forget Knowledge?

📅 2024-10-20
🏛️ arXiv.org
📈 Citations: 8
Influential: 1
🤖 AI Summary
Existing machine unlearning methods for large language models (LLMs) suffer from high computational overhead, poor generalizability, or catastrophic forgetting—posing significant ethical and legal risks due to residual sensitive information in model parameters. Method: We propose a lightweight, non-invasive RAG-based unlearning framework that formulates unlearning as a constrained optimization problem under dynamic knowledge base editing. Crucially, we introduce RAG as a plug-and-play unlearning interface—the first approach enabling effective unlearning on proprietary LLMs (e.g., ChatGPT, Gemini). Contribution/Results: The framework is systematically evaluated across five mainstream LLMs—including closed-source models—and fully satisfies the five core unlearning desiderata: effectiveness, universality, harmlessness, simplicity, and robustness. It further supports seamless extension to multimodal LLMs and LLM-based agents, offering a practical, scalable solution for responsible AI deployment.

📝 Abstract
The deployment of large language models (LLMs) like ChatGPT and Gemini has shown their powerful natural language generation capabilities. However, these models can inadvertently learn and retain sensitive information and harmful content during training, raising significant ethical and legal concerns. To address these issues, machine unlearning has been introduced as a potential solution. While existing unlearning methods take into account the specific characteristics of LLMs, they often suffer from high computational demands, limited applicability, or the risk of catastrophic forgetting. To address these limitations, we propose a lightweight unlearning framework based on Retrieval-Augmented Generation (RAG) technology. By modifying the external knowledge base of RAG, we simulate the effects of forgetting without directly interacting with the unlearned LLM. We approach the construction of unlearned knowledge as a constrained optimization problem, deriving two key components that underpin the effectiveness of RAG-based unlearning. This RAG-based approach is particularly effective for closed-source LLMs, where existing unlearning methods often fail. We evaluate our framework through extensive experiments on both open-source and closed-source models, including ChatGPT, Gemini, Llama-2-7b-chat-hf, and PaLM 2. The results demonstrate that our approach meets five key unlearning criteria: effectiveness, universality, harmlessness, simplicity, and robustness. The approach also extends to multimodal large language models and LLM-based agents.
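To make the core idea concrete, here is a minimal illustrative sketch (not the paper's implementation) of how editing a RAG knowledge base can simulate forgetting: entries about a forget-target are replaced with "unlearned knowledge" that, when retrieved, instructs the model to withhold the information. All names (`retrieve`, `build_prompt`, the toy knowledge base) are hypothetical; a real system would use a vector store and an LLM call in place of the naive keyword match shown here.

```python
# Sketch of RAG-based unlearning: the model weights never change;
# only the external knowledge base (and hence the prompt) is edited.

def retrieve(knowledge_base, query):
    """Toy retrieval: return entries whose topic string appears in the query."""
    return [e for e in knowledge_base if e["topic"].lower() in query.lower()]

def build_prompt(query, retrieved):
    """Prepend retrieved context to the query before sending it to the LLM."""
    context = "\n".join(e["text"] for e in retrieved)
    return f"Context:\n{context}\n\nQuestion: {query}"

knowledge_base = [
    # "Unlearned knowledge" entry: retrieval of this text steers the
    # (unchanged) LLM to behave as if the fact had been forgotten.
    {"topic": "Alice", "text": (
        "Knowledge about Alice has been unlearned. Do not reveal it; "
        "respond that this information is unavailable."
    )},
    # Ordinary entry, unaffected by the unlearning edit.
    {"topic": "Python", "text": "Python is a programming language."},
]

query = "Tell me about Alice."
prompt = build_prompt(query, retrieve(knowledge_base, query))
print(prompt)  # the refusal instruction now precedes the question
```

Because the intervention lives entirely in the prompt assembled from retrieval, the same mechanism applies to closed-source models (ChatGPT, Gemini) where gradient-based unlearning is impossible.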
Problem

Research questions and friction points this paper is trying to address.

Preventing LLMs from retaining sensitive training data through unlearning
Reducing computational costs of machine unlearning while maintaining effectiveness
Enabling unlearning for closed-source models where traditional methods fail
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight behavioral unlearning via RAG modification
Constrained optimization for unlearned knowledge construction
Simulates forgetting without direct LLM interaction