Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models

📅 2024-07-25
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the challenge of enabling generative language models to selectively forget sensitive data under privacy regulations such as GDPR—without access to the original training data. To this end, we propose Iterative Contrastive Unlearning (ICU), the first framework that integrates knowledge-guided unlearning, contrastive learning augmentation, and dynamic evaluation optimization. ICU employs a lightweight iterative fine-tuning paradigm, jointly optimizing a knowledge-unlearning loss and a contrastive objective to balance forgetting efficacy and model utility. Crucially, it operates in a data-free setting. Extensive experiments across multiple benchmarks demonstrate that ICU achieves over 98% removal of sensitive information while preserving more than 95% of the original language modeling performance—substantially outperforming state-of-the-art unlearning methods. ICU thus provides a scalable, data-free, and regulation-compliant solution for privacy-preserving large language model deployment.

📝 Abstract
Recent advances in machine learning, particularly in Natural Language Processing (NLP), have produced powerful models trained on vast datasets. However, these models risk leaking sensitive information, raising privacy concerns. In response, regulatory measures such as the European Union's General Data Protection Regulation (GDPR) have driven increasing interest in Machine Unlearning techniques, which enable models to selectively forget specific data entries. Early unlearning approaches primarily relied on pre-processing methods, while more recent research has shifted towards training-based solutions. Despite their effectiveness, a key limitation persists: most methods require access to the original training data, which is often unavailable. Additionally, directly applying unlearning techniques risks undermining the model's expressive capabilities. To address these challenges, we introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components: a Knowledge Unlearning Induction module that targets specific knowledge for removal via an unlearning loss; a Contrastive Learning Enhancement module that preserves the model's expressive capabilities against the pure unlearning objective; and an Iterative Unlearning Refinement module that dynamically adjusts the unlearning process through ongoing evaluation and updates. Experimental results demonstrate the efficacy of our ICU method in unlearning sensitive information while maintaining the model's overall performance, offering a promising solution for privacy-conscious machine learning applications.
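The joint objective the abstract describes can be sketched as a negated language-modeling loss on the forget set (so that descending on it ascends the NLL, erasing the targeted knowledge) plus a contrastive term that keeps representations useful. This is a minimal pure-Python illustration, not the paper's exact formulation: the function names, the triplet-style form of the contrastive term, and the `alpha`/`margin` hyperparameters are assumptions.

```python
import math

def token_nll(probs, target_idx):
    """Negative log-likelihood of the target token under a probability vector."""
    return -math.log(probs[target_idx])

def cosine(u, v):
    """Cosine similarity between two representation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def icu_style_loss(forget_probs, forget_targets, anchor, positive, negative,
                   alpha=1.0, margin=0.5):
    """Illustrative ICU-style joint objective (assumed form, not the paper's code).

    forget_probs/forget_targets: per-token output distributions and gold tokens
    from the sequence to be forgotten; anchor/positive/negative: representation
    vectors for a general sample, a retain sample, and a forget sample.
    """
    # Unlearning term: NEGATED average NLL on the forget set, so minimizing
    # this loss performs gradient ascent on the forget data's likelihood loss.
    unlearn = -sum(token_nll(p, t) for p, t in zip(forget_probs, forget_targets)) \
              / len(forget_targets)
    # Contrastive term (triplet-style): pull the anchor toward the retain
    # positive and push it away from the forget negative, up to a margin.
    contrast = max(0.0, cosine(anchor, negative) - cosine(anchor, positive) + margin)
    return unlearn + alpha * contrast

# Toy usage: two forget tokens plus 2-d representation vectors.
loss = icu_style_loss(
    forget_probs=[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
    forget_targets=[0, 1],
    anchor=[1.0, 0.0], positive=[1.0, 0.0], negative=[0.0, 1.0],
)
```

In a real setup both terms would be computed from a language model's logits and hidden states and balanced per iteration by the refinement loop; the scalar `alpha` here stands in for that dynamic weighting.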
Problem

Research questions and friction points this paper is trying to address.

Address privacy concerns in NLP models
Enable selective data forgetting efficiently
Maintain model performance post-unlearning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative Contrastive Unlearning framework
Knowledge Unlearning Induction module
Contrastive Learning Enhancement module
Haoyu Tang
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Ye Liu
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Xukai Liu
University of Science and Technology of China
Knowledge Graph · Natural Language Processing
Kai Zhang
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Yanghai Zhang
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Qi Liu
State Key Laboratory of Cognitive Intelligence, University of Science and Technology of China
Enhong Chen
University of Science and Technology of China
data mining · recommender system · machine learning