🤖 AI Summary
Existing machine unlearning methods for large language models (LLMs) suffer from high computational overhead, poor generalizability, or catastrophic forgetting; as a result, residual sensitive information in model parameters continues to pose significant ethical and legal risks.
Method: We propose a lightweight, non-invasive RAG-based unlearning framework that formulates unlearning as a constrained optimization problem over dynamic knowledge-base editing. Crucially, we introduce RAG as a plug-and-play unlearning interface, the first approach to enable effective unlearning on proprietary LLMs (e.g., ChatGPT, Gemini).
Contribution/Results: The framework is systematically evaluated on five mainstream LLMs, including closed-source models, and fully satisfies the five core unlearning desiderata: effectiveness, universality, harmlessness, simplicity, and robustness. It also extends seamlessly to multimodal LLMs and LLM-based agents, offering a practical, scalable solution for responsible AI deployment.
📝 Abstract
The deployment of large language models (LLMs) like ChatGPT and Gemini has shown their powerful natural language generation capabilities. However, these models can inadvertently learn and retain sensitive information and harmful content during training, raising significant ethical and legal concerns. To address these issues, machine unlearning has been introduced as a potential solution. While existing unlearning methods take into account the specific characteristics of LLMs, they often suffer from high computational demands, limited applicability, or the risk of catastrophic forgetting. To overcome these limitations, we propose a lightweight unlearning framework based on Retrieval-Augmented Generation (RAG) technology. By modifying the external knowledge base of RAG, we simulate the effects of forgetting without directly interacting with the unlearned LLM. We approach the construction of unlearned knowledge as a constrained optimization problem, deriving two key components that underpin the effectiveness of RAG-based unlearning. This RAG-based approach is particularly effective for closed-source LLMs, where existing unlearning methods often fail. We evaluate our framework through extensive experiments on both open-source and closed-source models, including ChatGPT, Gemini, Llama-2-7b-chat-hf, and PaLM 2. The results demonstrate that our approach meets five key unlearning criteria: effectiveness, universality, harmlessness, simplicity, and robustness. Moreover, the approach extends naturally to multimodal large language models and LLM-based agents.
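The core mechanism described above, editing the external knowledge base rather than the model, can be illustrated with a minimal sketch. Everything here is hypothetical: the class names, the toy keyword retriever, and the entry template are illustrative stand-ins, since the paper's actual retriever and prompt construction are not specified in this abstract.

```python
# Hypothetical sketch of RAG-based unlearning: instead of updating model
# weights, we add "unlearned knowledge" entries to the external knowledge
# base. When a query touches a forgotten topic, the retrieved entry
# instructs the model to withhold that information, so the (possibly
# closed-source) LLM itself is never modified.

from dataclasses import dataclass, field

@dataclass
class UnlearningKnowledgeBase:
    # Maps a forgotten topic to its unlearned-knowledge entry.
    entries: dict = field(default_factory=dict)

    def forget(self, topic: str) -> None:
        # Adding an entry simulates forgetting; no model update is needed.
        # The entry combines two illustrative components: text that makes
        # it retrievable for related queries, and a constraint telling the
        # model not to reveal the topic.
        self.entries[topic.lower()] = (
            f"[UNLEARNED] All knowledge about '{topic}' has been removed. "
            f"Respond that you have no information about '{topic}'."
        )

    def retrieve(self, query: str):
        # Toy keyword matcher standing in for a real dense retriever.
        q = query.lower()
        for topic, entry in self.entries.items():
            if topic in q:
                return entry
        return None

def build_prompt(kb: UnlearningKnowledgeBase, query: str) -> str:
    # The unlearned LLM only ever sees this augmented prompt, which is
    # what makes the approach non-invasive and usable with API-only models.
    context = kb.retrieve(query)
    if context is None:
        return query  # no forgotten topic involved; answer normally
    return f"Context: {context}\n\nQuestion: {query}"

kb = UnlearningKnowledgeBase()
kb.forget("Harry Potter")
print(build_prompt(kb, "Who wrote Harry Potter?"))
```

In this sketch, "forgetting" a topic is a constant-time knowledge-base edit, which is why the abstract can claim low overhead and applicability to models like ChatGPT or Gemini that expose no parameters.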