Does Machine Unlearning Truly Remove Model Knowledge? A Framework for Auditing Unlearning in LLMs

📅 2025-05-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Verifying the effectiveness of machine unlearning in large language models (LLMs) remains fundamentally challenging due to the lack of systematic, auditable evaluation frameworks. Method: We propose the first comprehensive auditing framework for LLM unlearning, introducing a novel activation-perturbation-based auditing technique that probes intermediate-layer representations—moving beyond conventional input-output prompt-based testing. We further establish a standardized benchmark comprising three datasets, six unlearning algorithms, and five auditing methods. Results: Empirical evaluation reveals pervasive latent knowledge retention across state-of-the-art unlearning methods. Our activation-perturbation approach achieves significantly higher sensitivity in detecting residual knowledge than prevailing prompt-based auditing techniques—yielding an average improvement of 27.4%. The framework provides a reproducible, interpretable, and high-sensitivity quantitative standard for regulatory compliance assessment of machine unlearning.

📝 Abstract
In recent years, Large Language Models (LLMs) have achieved remarkable advancements, drawing significant attention from the research community. Their capabilities are largely attributed to large-scale architectures, which require extensive training on massive datasets. However, such datasets often contain sensitive or copyrighted content sourced from the public internet, raising concerns about data privacy and ownership. Regulatory frameworks, such as the General Data Protection Regulation (GDPR), grant individuals the right to request the removal of such sensitive information. This has motivated the development of machine unlearning algorithms that aim to remove specific knowledge from models without the need for costly retraining. Despite these advancements, evaluating the efficacy of unlearning algorithms remains a challenge due to the inherent complexity and generative nature of LLMs. In this work, we introduce a comprehensive auditing framework for unlearning evaluation, comprising three benchmark datasets, six unlearning algorithms, and five prompt-based auditing methods. By using various auditing algorithms, we evaluate the effectiveness and robustness of different unlearning strategies. To explore alternatives beyond prompt-based auditing, we propose a novel technique that leverages intermediate activation perturbations, addressing the limitations of auditing methods that rely solely on model inputs and outputs.
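The activation-perturbation idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy two-layer "model", the Gaussian noise scale, and the KL-based sensitivity score are all illustrative assumptions; in a real audit the hidden state would be an intermediate transformer layer accessed via a forward hook, and the score would be compared between unlearned and retrained models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an LLM: two affine layers with a nonlinearity.
# (Hypothetical shapes; a real audit hooks an intermediate transformer layer.)
W1 = rng.normal(size=(16, 32))
W2 = rng.normal(size=(32, 8))

def forward(x, hidden_perturbation=None):
    """Run the toy model, optionally perturbing the hidden activations."""
    h = np.tanh(x @ W1)
    if hidden_perturbation is not None:
        h = h + hidden_perturbation  # inject noise at the intermediate layer
    logits = h @ W2
    # softmax over the vocabulary-like output dimension
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def activation_perturbation_score(x, sigma=0.1, n_samples=32):
    """Mean KL divergence between clean and perturbed output distributions.
    A high score on forget-set inputs suggests the hidden representation
    still encodes knowledge that shapes the output (residual knowledge)."""
    p_clean = forward(x)
    kls = []
    for _ in range(n_samples):
        noise = rng.normal(scale=sigma, size=(x.shape[0], 32))
        p_pert = forward(x, hidden_perturbation=noise)
        kls.append(np.sum(p_clean * np.log(p_clean / p_pert), axis=-1).mean())
    return float(np.mean(kls))

x = rng.normal(size=(4, 16))  # stand-in for forget-set inputs
score = activation_perturbation_score(x)
print(f"sensitivity score: {score:.4f}")
```

Because the probe operates on internal representations rather than prompts, it can surface knowledge that prompt-based input/output testing misses, which is the motivation the abstract gives for moving beyond prompt-based auditing.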
Problem

Research questions and friction points this paper is trying to address.

Evaluating the effectiveness of machine unlearning in LLMs
Developing an auditable evaluation framework for unlearning algorithms
Addressing the limitations of prompt-based unlearning auditing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Comprehensive auditing framework for unlearning evaluation (three datasets, six algorithms, five auditing methods)
Novel auditing technique based on intermediate activation perturbations
Evaluates the effectiveness and robustness of six unlearning algorithms