🤖 AI Summary
This work addresses *true knowledge removal*, as opposed to obfuscation, in large language models (LLMs), motivated by data privacy, regulatory compliance, and ethical deployment requirements. We formally distinguish unlearning from obfuscation and introduce a probing-based evaluation framework to measure whether targeted information is genuinely removed. Our DF-MCQ method jointly achieves knowledge erasure and refusal behaviour by minimizing the KL divergence between the model's prediction distribution over self-generated multiple-choice questions and a uniform (flat) distribution. Experiments demonstrate that after unlearning, the model refuses target-knowledge queries with over 90% probability, while its predictions on probing questions exhibit uncertainty indistinguishable from random guessing, substantially outperforming state-of-the-art obfuscation methods.
📝 Abstract
Unlearning has emerged as a critical capability for large language models (LLMs) to support data privacy, regulatory compliance, and ethical AI deployment. Recent techniques often rely on obfuscation, injecting incorrect or irrelevant information to suppress knowledge. Such methods effectively constitute knowledge addition rather than true removal, often leaving models vulnerable to probing. In this paper, we formally distinguish unlearning from obfuscation and introduce a probing-based evaluation framework to assess whether existing approaches genuinely remove targeted information. Moreover, we propose DF-MCQ, a novel unlearning method that flattens the model's predictive distribution over automatically generated multiple-choice questions using KL divergence, effectively removing knowledge about target individuals and triggering appropriate refusal behaviour. Experimental results demonstrate that DF-MCQ achieves unlearning with an over-90% refusal rate and random-choice-level uncertainty on probing questions, far higher than the uncertainty obtained by obfuscation.
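The core of the flattening objective described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's code: the function name and the direction of the KL divergence are assumptions, since the abstract only states that the predictive distribution over multiple-choice options is flattened toward uniform.

```python
import math

def softmax(logits):
    """Convert raw option logits to a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def flattening_loss(option_logits):
    """KL divergence between the model's distribution over the answer
    options of one multiple-choice question and the uniform distribution.

    Direction KL(p || u) is assumed here; minimizing it drives the model
    toward assigning equal probability to every option, i.e. maximal
    uncertainty about the unlearned fact.
    """
    probs = softmax(option_logits)
    n = len(probs)
    # KL(p || u) = sum_i p_i * log(p_i / (1/n)) = sum_i p_i * log(p_i * n)
    return sum(p * math.log(p * n) for p in probs if p > 0.0)
```

A perfectly flat distribution gives a loss of zero, while a confident (peaked) prediction gives a large positive loss; during unlearning this quantity would be minimized over logits produced by the model on self-generated probe questions.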