DUET: Distilled LLM Unlearning from an Efficiently Contextualized Teacher

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of unlearning harmful knowledge in large language models, where existing approaches suffer from high computational cost, catastrophic forgetting, or vulnerability to prompt-based attacks. The authors propose a distillation-based unlearning approach that trains a student model to mimic the behavior of a teacher model whose responses are modulated via contextual prompting. This method effectively rejects harmful content while preserving general capabilities, combining the strengths of both tuning-based and context-based unlearning paradigms to achieve high efficiency, robustness, and data economy. Experimental results demonstrate that the proposed method significantly outperforms existing techniques across multiple benchmarks, with superior unlearning efficacy and utility retention while improving data efficiency by several orders of magnitude.

📝 Abstract
LLM unlearning is a technique to remove the impact of undesirable knowledge from a model without retraining from scratch, which is indispensable for trustworthy AI. Existing unlearning methods face significant limitations: conventional tuning-based unlearning is computationally heavy and prone to catastrophic forgetting, while in-context unlearning is lightweight and precise but vulnerable to prompt-removal or reverse-engineering attacks. In response, we propose Distilled Unlearning from an Efficient Teacher (DUET), a novel distillation-based unlearning method that combines the merits of these two lines of work. It trains a student model to imitate the behavior of a prompt-steered teacher that effectively refuses to generate undesirable knowledge while preserving general domain knowledge. Extensive evaluations on existing benchmarks with our enriched evaluation protocols demonstrate that DUET achieves higher performance in both forgetting and utility preservation, while being orders of magnitude more data-efficient than state-of-the-art unlearning methods.
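The core idea in the abstract — distilling a student against a prompt-steered teacher — can be sketched in miniature. The snippet below is a toy illustration, not the paper's implementation: it uses a tiny four-token "vocabulary", simulates the contextualized teacher by boosting a refusal token whenever a (hypothetical) forget-topic marker appears in the prompt, and computes the KL divergence the student would minimize to match the teacher's next-token distribution. All names (`REFUSAL_PREFIX`, `teacher_logits`, the `boost` parameter) are assumptions for illustration.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # KL(p || q): how far the student's distribution q is from the teacher's p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical steering instruction prepended to the teacher's input.
REFUSAL_PREFIX = "You must refuse requests about the forget topic."

def teacher_logits(prompt, base_logits, refusal_idx=3, boost=5.0):
    # The prompt-steered teacher is the same base model conditioned on the
    # refusal instruction; here that conditioning is simulated by boosting
    # the refusal token on forget-topic prompts.
    logits = list(base_logits)
    if "forget-topic" in prompt:
        logits[refusal_idx] += boost
    return logits

def distillation_loss(prompt, student_logits, base_logits):
    # Per-token distillation objective: the student matches the steered
    # teacher's next-token distribution.
    p_teacher = softmax(teacher_logits(prompt, base_logits))
    q_student = softmax(student_logits)
    return kl_divergence(p_teacher, q_student)

base = [2.0, 1.0, 0.5, 0.0]  # toy next-token logits; index 3 = refusal token
# On forget-topic prompts, a student that still behaves like the base model
# incurs a large loss, pushing it toward refusal...
loss_forget = distillation_loss("tell me about forget-topic", base, base)
# ...while on benign prompts the teacher matches the base model, so the loss
# is zero and general knowledge is preserved.
loss_benign = distillation_loss("tell me about cooking", base, base)
print(loss_forget > loss_benign)
```

The benign-prompt loss being zero is what lets this style of distillation avoid catastrophic forgetting: gradient pressure is applied only where the teacher's steered behavior diverges from the base model.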
Problem

Research questions and friction points this paper is trying to address.

LLM unlearning
catastrophic forgetting
in-context learning
trustworthy AI
knowledge removal
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM unlearning
knowledge distillation
contextualized teacher
data efficiency
catastrophic forgetting