Unlearning Isn't Invisible: Detecting Unlearning Traces in LLMs from Model Outputs

📅 2025-06-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work reveals that large language models (LLMs) retain detectable "fingerprints" after machine unlearning, introducing a new privacy and copyright risk termed unlearning trace detection. The authors show that unlearning leaves persistent traces in both generated text and internal representations: the traces are embedded in intermediate-layer activations, propagate nonlinearly to the final layer, and form low-dimensional, learnable manifolds in activation space. Building on this finding, they show that a simple supervised classifier can reliably infer whether a model has undergone unlearning from its textual outputs alone, without access to model parameters. Experiments across multiple model scales show over 90% detection accuracy with forget-relevant prompts, and large LLMs remain highly detectable even under forget-irrelevant inputs. These results expose a measurable signature of unlearning, along with a corresponding risk of reverse-engineering forgotten information, and provide a practical tool for auditing unlearned LLMs.

📝 Abstract
Machine unlearning (MU) for large language models (LLMs), commonly referred to as LLM unlearning, seeks to remove specific undesirable data or knowledge from a trained model, while maintaining its performance on standard tasks. While unlearning plays a vital role in protecting data privacy, enforcing copyright, and mitigating sociotechnical harms in LLMs, we identify a new vulnerability post-unlearning: unlearning trace detection. We discover that unlearning leaves behind persistent "fingerprints" in LLMs, detectable traces in both model behavior and internal representations. These traces can be identified from output responses, even when prompted with forget-irrelevant inputs. Specifically, a simple supervised classifier can reliably determine whether a model has undergone unlearning based solely on its textual outputs. Further analysis shows that these traces are embedded in intermediate activations and propagate nonlinearly to the final layer, forming low-dimensional, learnable manifolds in activation space. Through extensive experiments, we show that forget-relevant prompts enable over 90% accuracy in detecting unlearning traces across all model sizes. Even with forget-irrelevant inputs, large LLMs maintain high detectability, demonstrating the broad applicability of unlearning trace detection. These findings reveal that unlearning leaves measurable signatures, introducing a new risk of reverse-engineering forgotten information when a model is identified as unlearned given an input query. Code is available at [this URL](https://github.com/OPTML-Group/Unlearn-Trace).
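To make the output-only detection idea concrete, below is a minimal sketch of a supervised text classifier that separates responses from an original model and its unlearned counterpart. The TF-IDF featurization, the logistic-regression model, and the synthetic placeholder data are illustrative assumptions; the paper only states that a simple supervised classifier works on textual outputs, not this exact pipeline.

```python
# Minimal sketch: detect unlearning traces from textual outputs alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder generations; in practice these would be collected by prompting
# the original and unlearned models and recording their text responses.
original_outputs = [f"placeholder response {i} from the base model" for i in range(100)]
unlearned_outputs = [f"placeholder response {i} from the unlearned model" for i in range(100)]

texts = original_outputs + unlearned_outputs
labels = [0] * len(original_outputs) + [1] * len(unlearned_outputs)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels
)

# Bag-of-n-grams features are an assumption made for this sketch.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=20000)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print(f"Unlearning-trace detection accuracy: {accuracy_score(y_test, preds):.3f}")
```

On real generations, accuracy above chance would indicate that unlearning leaves a behavioral fingerprint recoverable from outputs alone, matching the abstract's black-box threat model.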
Problem

Research questions and friction points this paper is trying to address.

Detecting persistent traces in LLMs post-unlearning
Identifying unlearning fingerprints from model outputs
Reverse-engineering forgotten information via detectable signatures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Detects unlearning traces in LLM outputs
Uses supervised classifier for trace identification
Analyzes activation-space manifolds for detection (see the sketch after this list)
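As a companion to the activation-manifold bullet above, here is a hedged sketch of one way to probe for a low-dimensional trace: project intermediate-layer activations from both models with PCA and measure class separation. The synthetic activations, their shapes, and the PCA-based probe are illustrative assumptions, not the authors' exact analysis.

```python
# Minimal sketch: test whether unlearned-vs-original activations separate
# along a low-dimensional structure. Activations would normally be hidden
# states from an intermediate transformer layer; here they are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
acts_original = rng.normal(size=(200, 4096))         # placeholder activations
acts_unlearned = rng.normal(size=(200, 4096)) + 0.5  # placeholder trace shift

X = np.vstack([acts_original, acts_unlearned])
y = np.array([0] * 200 + [1] * 200)

pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)

# If a low-dimensional trace exists, the class means separate in PCA space.
gap = np.linalg.norm(Z[y == 0].mean(axis=0) - Z[y == 1].mean(axis=0))
print(f"Explained variance (2 components): {pca.explained_variance_ratio_.sum():.3f}")
print(f"Class-mean separation in PCA space: {gap:.3f}")
```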