SoK: Machine Unlearning for Large Language Models

📅 2025-06-10
🤖 AI Summary
Existing machine unlearning research for large language models (LLMs) overlooks a fundamental distinction in the *intent to forget*: genuine knowledge deletion versus mere behavioral suppression, leading to conceptual ambiguity and misaligned evaluation. Method: We propose the first intent-oriented taxonomy, systematically analyzing gradient ascent, model editing, and hidden-representation re-steering techniques; critically assessing prevailing evaluation metrics; and modeling scalability and sequential-forgetting bottlenecks. Contribution/Results: We demonstrate that most current methods implement behavioral suppression rather than true knowledge erasure; expose a critical misalignment between standard evaluation protocols and forgetting intent; introduce an evaluation framework explicitly aligned with intent; and establish a verification pathway grounded in privacy compliance and practical deployment. This work clarifies the ontological boundary of machine unlearning, advancing it from superficial behavioral inhibition toward controllable, verifiable, and scalable *actual knowledge removal*.

📝 Abstract
Large language model (LLM) unlearning has become a critical topic in machine learning, aiming to eliminate the influence of specific training data or knowledge without retraining the model from scratch. A variety of techniques have been proposed, including gradient ascent, model editing, and re-steering hidden representations. While existing surveys often organize these methods by their technical characteristics, such classifications tend to overlook a more fundamental dimension: the underlying intention of unlearning, i.e., whether it seeks to truly remove internal knowledge or merely suppress its behavioral effects. In this SoK paper, we propose a new taxonomy based on this intention-oriented perspective. Building on this taxonomy, we make three key contributions. First, we revisit recent findings suggesting that many removal methods may functionally behave like suppression, and explore whether true removal is necessary or achievable. Second, we survey existing evaluation strategies, identify limitations in current metrics and benchmarks, and suggest directions for developing more reliable and intention-aligned evaluations. Third, we highlight practical challenges, such as scalability and support for sequential unlearning, that currently hinder the broader deployment of unlearning methods. In summary, this work offers a comprehensive framework for understanding and advancing unlearning in generative AI, aiming to support future research and guide policy decisions around data removal and privacy.
Problem

Research questions and friction points this paper is trying to address.

Classify machine unlearning methods by intention, not just technical traits
Assess whether true knowledge removal is feasible or necessary in LLMs
Improve evaluation metrics for unlearning effectiveness and scalability
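The evaluation gap raised by the last point can be made concrete. A common gold standard in the unlearning literature is comparison against a model retrained from scratch without the forget data. The sketch below is illustrative only, using hypothetical probe probabilities rather than any metric from the paper; it shows how a suppression-only model can look "forgotten" under a refusal-style check yet diverge sharply from the retrained reference:

```python
import numpy as np

def forgetting_gap(p_unlearned, p_retrained):
    # Mean absolute difference between the unlearned model's forget-set
    # predictions and those of a model retrained without the forget data.
    # A small gap suggests knowledge-level removal; a suppressed model can
    # score poorly here even when it reliably refuses to answer.
    return float(np.mean(np.abs(p_unlearned - p_retrained)))

# Hypothetical probe probabilities on three forget-set queries.
p_retrained  = np.array([0.50, 0.48, 0.52])  # gold standard: never saw the data
p_suppressed = np.array([0.02, 0.03, 0.01])  # refuses, but knowledge may persist
p_removed    = np.array([0.47, 0.55, 0.50])  # behaves like the retrained model

print(forgetting_gap(p_suppressed, p_retrained))  # large gap: suppression
print(forgetting_gap(p_removed, p_retrained))     # small gap: removal
```

An intent-aligned metric of this kind rewards matching the retrained model's behavior, whereas a refusal-rate metric rewards suppression regardless of what knowledge remains inside the model.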
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient ascent for targeted removal of training-data influence
Model editing to alter specific knowledge traces
Re-steering hidden representations to suppress behavior
Jie Ren
Michigan State University
Yue Xing
Michigan State University
Yingqian Cui
Michigan State University
Charu C. Aggarwal
IBM T. J. Watson Research Center
Hui Liu
Michigan State University