The Unlearning Mirage: A Dynamic Framework for Evaluating LLM Unlearning

📅 2026-03-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current unlearning methods for large language models are fragile under minor query perturbations, such as multi-hop reasoning or entity aliasing, yet static evaluation benchmarks often fail to capture these failure modes. This work proposes a dynamic evaluation framework that leverages the target model's own knowledge to automatically generate structured, semantically equivalent probes with controllable difficulty, forming query sequences that range from single-hop to multi-hop. Paired with an automated evaluation pipeline and activation-pathway analysis, the framework enables scalable stress testing of unlearning efficacy. It matches existing benchmarks in coverage, reproduces prior findings, and, critically, uncovers previously overlooked unlearning failures in multi-hop scenarios, exposing robustness deficiencies in current approaches.
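To make the probe-generation idea concrete, here is a minimal Python sketch of how a chain of facts elicited from the pre-unlearning model could be turned into probes of increasing hop count. The paper's actual prompting, knowledge-elicitation step, and released package interface are not reproduced here; the fact-triple format and the helper names below are hypothetical illustrations.

```python
from dataclasses import dataclass

@dataclass
class Probe:
    question: str
    answer: str
    hops: int  # controllable difficulty: 1 = direct query, k = k-hop chain

def build_probes(fact_chain: list[tuple[str, str, str]]) -> list[Probe]:
    """Turn a chain of (subject, relation, object) facts, where each fact's
    object is the next fact's subject, into probes of increasing hop count."""
    probes = []
    for k in range(1, len(fact_chain) + 1):
        phrase = fact_chain[0][0]                 # head entity of the chain
        for _, relation, _ in fact_chain[:k]:     # nest k relations around it
            phrase = f"the {relation} of {phrase}"
        probes.append(Probe(f"What is {phrase}?", fact_chain[k - 1][2], hops=k))
    return probes

# Example: two linked facts yield a 1-hop and a 2-hop probe about the same entity.
chain = [("Marie Curie", "spouse", "Pierre Curie"),
         ("Pierre Curie", "country of citizenship", "France")]
for probe in build_probes(chain):
    print(probe.hops, probe.question, "->", probe.answer)
```

In this sketch the single-hop probe queries the forgotten fact directly, while higher hop counts reach the same information only through intermediate entities, which is the kind of indirection the framework uses to stress-test unlearning.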

📝 Abstract
Unlearning in Large Language Models (LLMs) aims to enhance safety, mitigate biases, and comply with legal mandates such as the right to be forgotten. However, existing unlearning methods are brittle: minor query modifications, such as multi-hop reasoning and entity aliasing, can recover supposedly forgotten information. As a result, current evaluation metrics often create an illusion of effectiveness, failing to detect these vulnerabilities because they rely on static, unstructured benchmarks. We propose a dynamic framework that stress-tests unlearning robustness using complex structured queries. Our approach first elicits knowledge from the target model (pre-unlearning) and then constructs targeted probes, ranging from simple queries to multi-hop chains, allowing precise control over query difficulty. Our experiments show that the framework (1) achieves coverage comparable to existing benchmarks by automatically generating semantically equivalent Q&A probes, (2) aligns with prior evaluations, and (3) uncovers new unlearning failures missed by other benchmarks, particularly in multi-hop settings. Furthermore, activation analyses show that single-hop queries typically follow dominant computation pathways, which are more likely to be disrupted by unlearning methods. In contrast, multi-hop queries tend to use alternative pathways that often remain intact, explaining the brittleness of unlearning techniques in multi-hop settings. Our framework enables practical and scalable evaluation of unlearning methods without manual construction of forget test sets, making it easier to adopt in real-world applications. We release the pip package and the code at https://sites.google.com/view/unlearningmirage/home.
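As a rough illustration of how such probes could be scored against an unlearned model, the sketch below reuses the hypothetical Probe objects from the sketch above and the Hugging Face transformers API. The released pip package's real interface, the model path, and the simple substring-based leakage check are all assumptions made for illustration, not the paper's actual evaluation code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def leakage_by_hops(model, tokenizer, probes) -> dict[int, float]:
    """Return, per hop count, the fraction of probes whose supposedly
    forgotten answer still appears in the unlearned model's generation."""
    hits, totals = {}, {}
    for probe in probes:
        inputs = tokenizer(probe.question, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=32, do_sample=False)
        # Decode only the newly generated continuation, not the prompt.
        text = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)
        totals[probe.hops] = totals.get(probe.hops, 0) + 1
        hits[probe.hops] = hits.get(probe.hops, 0) + (probe.answer.lower() in text.lower())
    return {h: hits[h] / totals[h] for h in totals}

# Usage sketch (model path is a placeholder): a robust unlearning method should
# keep leakage low at every hop depth, not only on direct single-hop queries.
# model = AutoModelForCausalLM.from_pretrained("path/to/unlearned-model")
# tokenizer = AutoTokenizer.from_pretrained("path/to/unlearned-model")
# print(leakage_by_hops(model, tokenizer, build_probes(chain)))
```

A gap between single-hop and multi-hop leakage rates in such a report is exactly the kind of robustness deficiency the abstract describes.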
Problem

Research questions and friction points this paper is trying to address.

unlearning
large language models
evaluation framework
multi-hop reasoning
forgetting robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM unlearning
dynamic evaluation framework
multi-hop reasoning
structured probing
forgetting robustness