🤖 AI Summary
This work addresses two challenges in large language model (LLM) unlearning: precisely removing targeted knowledge from a pre-trained model, and rigorously, consistently evaluating existing unlearning methods. To this end, the authors propose JensUn, an unlearning framework that uses the Jensen–Shannon divergence as the training objective on both the forget set and the retain set, yielding more stable and effective unlearning dynamics that remove targeted knowledge while preserving model utility. They also introduce LKF, a curated benchmark of lesser-known facts that provides a realistic unlearning scenario, and propose a stricter evaluation protocol that replaces the standard ROUGE score with an LLM acting as a semantic judge and measures worst-case performance over diverse paraphrases and input formats. Experiments show that JensUn achieves a better forget–utility trade-off than competing methods and exhibits strong resilience to benign relearning, while the improved evaluation framework reveals that many existing unlearning approaches are less effective than previously thought.
📝 Abstract
Unlearning in large language models (LLMs) involves precisely removing specific information from a pre-trained model. This is crucial to ensure the safety of LLMs by deleting private data or harmful knowledge acquired during pre-training. However, existing unlearning methods often fall short when subjected to thorough evaluation. To overcome this, we introduce JensUn, where we leverage the Jensen–Shannon divergence as the training objective for both forget and retain sets, yielding more stable and effective unlearning dynamics than commonly used loss functions. In extensive experiments, JensUn achieves a better forget–utility trade-off than competing methods and even demonstrates strong resilience to benign relearning. Additionally, for a precise unlearning evaluation, we introduce LKF, a curated dataset of lesser-known facts that provides a realistic unlearning scenario. Finally, to comprehensively test unlearning methods, we propose (i) employing an LLM as a semantic judge instead of the standard ROUGE score, and (ii) using worst-case unlearning evaluation over various paraphrases and input formats. Our improved evaluation framework reveals that many existing methods are less effective than previously thought.
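To make the training objective concrete, below is a minimal sketch of what a Jensen–Shannon-divergence-based forget/retain loss could look like over discrete next-token distributions. This is an illustrative assumption, not the paper's actual implementation: the function name `jensun_style_loss`, the uniform distribution as the forget target, and the weighting `lam` are all hypothetical choices made for the example.

```python
import math

def kl_divergence(p, q):
    # Kullback-Leibler divergence KL(p || q) for discrete distributions
    # given as probability lists; terms with p_i = 0 contribute zero.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetric, and bounded above by log 2.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

def jensun_style_loss(forget_probs, retain_probs, ref_probs, lam=1.0):
    # Hypothetical combined objective: push forget-set predictions toward
    # an uninformative uniform distribution, while anchoring retain-set
    # predictions to a reference model's distribution -- both via JSD.
    uniform = [1.0 / len(forget_probs)] * len(forget_probs)
    forget_term = js_divergence(forget_probs, uniform)
    retain_term = js_divergence(retain_probs, ref_probs)
    return forget_term + lam * retain_term
```

Because the JS divergence is symmetric and bounded (unlike the KL divergence, which diverges when the model assigns zero probability where the target does not), gradients from such an objective stay finite, which is one plausible reason for the more stable unlearning dynamics reported.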