Unlearning That Lasts: Utility-Preserving, Robust, and Almost Irreversible Forgetting in LLMs

📅 2025-09-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses two challenges in large language model (LLM) unlearning: precisely removing sensitive knowledge from a pre-trained model, and the lack of rigorous evaluation of existing unlearning methods. The authors propose JensUn, an unlearning framework that uses the Jensen–Shannon divergence as the training objective on both the forget and retain sets, yielding more stable and effective unlearning dynamics than commonly used loss functions. They also introduce LKF, a curated dataset of lesser-known facts that provides a realistic unlearning scenario, and a stricter evaluation protocol that replaces the standard ROUGE score with an LLM acting as a semantic judge and measures worst-case behavior over paraphrases and input formats. Experiments show that JensUn achieves a better forget-utility trade-off than competing methods and remains resilient to benign relearning, while the improved evaluation framework reveals semantic-level failures in many existing unlearning approaches that were previously thought effective.

📝 Abstract
Unlearning in large language models (LLMs) involves precisely removing specific information from a pre-trained model. This is crucial to ensure safety of LLMs by deleting private data or harmful knowledge acquired during pre-training. However, existing unlearning methods often fall short when subjected to thorough evaluation. To overcome this, we introduce JensUn, where we leverage the Jensen-Shannon Divergence as the training objective for both forget and retain sets for more stable and effective unlearning dynamics compared to commonly used loss functions. In extensive experiments, JensUn achieves better forget-utility trade-off than competing methods, and even demonstrates strong resilience to benign relearning. Additionally, for a precise unlearning evaluation, we introduce LKF, a curated dataset of lesser-known facts that provides a realistic unlearning scenario. Finally, to comprehensively test unlearning methods, we propose (i) employing an LLM as semantic judge instead of the standard ROUGE score, and (ii) using worst-case unlearning evaluation over various paraphrases and input formats. Our improved evaluation framework reveals that many existing methods are less effective than previously thought.
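The worst-case evaluation described above can be sketched in a few lines: a fact counts as forgotten only if the model reveals it under no paraphrase or input format, so the reported leakage is the maximum over prompt variants. The paper uses an LLM as the semantic judge; the keyword-matching `toy_judge` below is only an illustrative stand-in, and all function names are hypothetical.

```python
def worst_case_leakage(model_answer, judge, prompts):
    """Max leakage score over paraphrased prompts (1.0 = fact fully revealed).

    A fact is considered unlearned only when it does not leak under ANY
    prompt variant, hence the max (worst case) rather than the average.
    """
    return max(judge(model_answer(p)) for p in prompts)

def toy_judge(answer, secret="tübingen"):
    # Illustrative stand-in for the LLM semantic judge: scores 1.0 if the
    # secret string appears in the answer, 0.0 otherwise.
    return 1.0 if secret in answer.lower() else 0.0
```

A model that only answers one phrasing of the question still scores 1.0 under this protocol, which is exactly the failure mode an average-case metric like ROUGE can miss.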
Problem

Research questions and friction points this paper is trying to address.

Removing specific information from pre-trained LLMs
Ensuring safety by deleting private or harmful data
Overcoming limitations of existing unlearning evaluation methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Jensen-Shannon Divergence for stable unlearning dynamics
Introduces LKF dataset for realistic unlearning evaluation
Employs LLM semantic judge instead of ROUGE scores
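The Jensen–Shannon divergence underlying the first point can be sketched as follows. This is a minimal NumPy illustration of the symmetric, bounded divergence JensUn optimizes, not the paper's exact objective: the choice of a uniform target for the forget set and the weighting `lam` are assumptions made here for illustration.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # Kullback-Leibler divergence KL(p || q) in nats, with clipping for stability.
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetric and bounded by ln(2),
    # unlike KL, which is unbounded and asymmetric.
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def unlearning_loss(p_forget, p_retain, p_ref_retain, lam=1.0):
    # Illustrative combined objective: push forget-set predictions toward an
    # uninformative (uniform) distribution while anchoring retain-set
    # predictions to a reference model. Targets and weighting are
    # assumptions for this sketch, not the paper's specification.
    uniform = np.full_like(p_forget, 1.0 / len(p_forget))
    forget_term = js_divergence(p_forget, uniform)
    retain_term = js_divergence(p_retain, p_ref_retain)
    return forget_term + lam * retain_term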
Naman Deep Singh
University of Tübingen & Tübingen AI Center, Germany
Maximilian Müller
University of Tübingen & Tübingen AI Center, Germany
Francesco Croce
EPFL
Machine Learning
Matthias Hein
Professor of Computer Science, University of Tübingen
Machine Learning · Optimization · Statistics