AI Summary
Large language models (LLMs) may inadvertently memorize sensitive data, copyrighted material, and harmful knowledge, necessitating precise and efficient knowledge removal under regulatory compliance (e.g., the "right to be forgotten"). This work systematically reviews over 180 papers on machine unlearning published since 2021. We propose, for the first time, a unified taxonomic framework categorizing methods by training phase: training-time, post-training, and inference-time unlearning. Additionally, we establish a critical evaluation framework encompassing dataset characteristics, standardized metrics, and application scenarios. Our analysis identifies fundamental bottlenecks in generalizability, scalability, and evaluation consistency across existing approaches. We further delineate concrete research directions to advance the field. Collectively, this study provides both theoretical foundations and practical guidelines for developing safe, trustworthy LLMs.
Abstract
The advancement of Large Language Models (LLMs) has revolutionized natural language processing, yet their training on massive corpora poses significant risks, including the memorization of sensitive personal data, copyrighted material, and knowledge that could facilitate malicious activities. To mitigate these issues and align with legal and ethical standards such as the "right to be forgotten," machine unlearning has emerged as a critical technique for selectively erasing specific knowledge from LLMs without compromising their overall performance. This survey provides a systematic review of over 180 papers on LLM unlearning published since 2021, focusing exclusively on large-scale generative models. Distinct from prior surveys, we introduce novel taxonomies for both unlearning methods and evaluations. We categorize methods into training-time, post-training, and inference-time approaches, according to the stage of the model lifecycle at which unlearning is applied. For evaluations, we not only systematically compile existing datasets and metrics but also critically analyze their advantages, disadvantages, and applicability, offering practical guidance to the research community. In addition, we discuss key challenges and promising future research directions. Our comprehensive overview aims to inform and guide the ongoing development of secure and reliable LLMs.