🤖 AI Summary
Large language models (LLMs) struggle to selectively forget specific knowledge (such as private information, biases, or outdated facts) without compromising their general capabilities.
Method: This paper proposes a parameter-efficient knowledge-forgetting framework that freezes the backbone model and optimizes only low-rank adaptation (LoRA) modules. It introduces a negative-sample-driven suppression mechanism over intermediate representations to selectively inhibit or replace target knowledge.
Contribution/Results: To our knowledge, this is the first work to deeply integrate LoRA with negative-sample training for knowledge forgetting, enabling fine-grained, localized low-rank updates instead of costly full fine-tuning or direct weight editing. Experiments across diverse factual forgetting benchmarks show performance on par with full fine-tuning and weight-editing methods, while reducing computational overhead by approximately 90%. The approach thus achieves a favorable trade-off among effectiveness, efficiency, and deployability.
📝 Abstract
Large language models (LLMs) possess vast knowledge acquired from extensive training corpora, yet they often cannot remove specific pieces of information on demand, posing challenges for privacy protection, bias mitigation, and knowledge correction. Traditional model unlearning approaches require computationally expensive fine-tuning or direct weight editing, making them impractical for real-world deployment. In this work, we introduce LoRA-based Unlearning with Negative Examples (LUNE), a lightweight framework that performs negative-only unlearning by updating only low-rank adapters while freezing the backbone, thereby localizing edits and avoiding disruptive global changes. Leveraging Low-Rank Adaptation (LoRA), LUNE targets intermediate representations to suppress (or replace) requested knowledge at an order-of-magnitude lower compute and memory cost than full fine-tuning or direct weight editing. Extensive experiments on multiple factual unlearning tasks show that LUNE (I) achieves effectiveness comparable to full fine-tuning and memory-editing methods, and (II) reduces computational cost by about an order of magnitude.
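The core mechanism can be sketched in a few lines. The snippet below is a minimal toy illustration, not the paper's implementation: it assumes a single linear layer as the "backbone", uses the standard LoRA initialization (Gaussian `A`, zero `B`), and adopts a "replace with a neutral target" objective as a stand-in for the negative-sample suppression loss. The frozen weight `W`, the probe input `x_neg`, and the zero target are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2  # toy dimensions; LoRA rank r << d_in, d_out

W = rng.normal(size=(d_out, d_in))  # frozen backbone weight (never updated)
A = rng.normal(size=(r, d_in))      # trainable LoRA factor (Gaussian init)
B = np.zeros((d_out, r))            # trainable LoRA factor (zero init)

x_neg = rng.normal(size=d_in)       # input probing the unwanted knowledge
y_old = W @ x_neg                   # original (to-be-forgotten) output
target = np.zeros(d_out)            # neutral replacement target (assumption)

lr = 0.01
for _ in range(300):
    u = A @ x_neg                        # low-rank bottleneck activation
    err = (W @ x_neg + B @ u) - target   # adapted output vs. neutral target
    # Gradient steps on the adapters only; W stays frozen throughout.
    B -= lr * np.outer(err, u)
    A -= lr * np.outer(B.T @ err, x_neg)

y_new = (W + B @ A) @ x_neg  # adapted output drifts away from y_old
```

Because only `A` and `B` (with `r * (d_in + d_out)` parameters) receive gradients, the edit stays localized in the low-rank subspace, which is what keeps the method roughly an order of magnitude cheaper than updating the full weight matrix.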