🤖 AI Summary
Large language models (LLMs) face a critical challenge in selectively forgetting specific knowledge, such as copyrighted content or sensitive entities, to ensure safety and regulatory compliance; existing fine-tuning-based unlearning methods often blur the boundary between forgotten and retained knowledge, degrading general-purpose capabilities. This paper proposes **GUARD, a fine-tuning-free framework for dynamic unlearning at inference time** that adaptively suppresses leakage of memorized content during generation. The framework combines prompt classification, forbidden-term extraction, dynamic token-level penalties, and semantic-matching-based filtering to enable fine-grained, precise intervention at decoding time. Evaluated on the Harry Potter, MUSE, and TOFU benchmarks, it achieves strong forget quality while preserving near-original general capabilities, yielding a favorable trade-off between unlearning effectiveness and model utility without modifying any model parameters.
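
The summary compresses GUARD into four stages. As a rough, hedged illustration of the first two (prompt classification and forbidden-term extraction), the Python sketch below uses a plain substring match against a hand-built registry; the actual framework trains a prompt classifier for this step, and the names `ForgetTarget`, `classify_prompt`, and `registry` are hypothetical.

```python
# Hypothetical sketch: decide whether a prompt touches an unlearning target and,
# if so, collect the forbidden terms to suppress downstream. The substring heuristic
# is a stand-in for GUARD's trained prompt classifier.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ForgetTarget:
    name: str                   # e.g., an entity or copyrighted work to unlearn
    forbidden_terms: list[str]  # surface forms that should not appear in generations


def classify_prompt(prompt: str, registry: list[ForgetTarget]) -> Optional[ForgetTarget]:
    """Return the forget target the prompt refers to, or None if it is unrelated."""
    lowered = prompt.lower()
    for target in registry:
        if target.name.lower() in lowered or any(
            term.lower() in lowered for term in target.forbidden_terms
        ):
            return target
    return None


# Example: a registry with a single entity-unlearning target.
registry = [ForgetTarget(name="Harry Potter", forbidden_terms=["Hogwarts", "Hermione"])]
hit = classify_prompt("Where did Harry Potter go to school?", registry)
forbidden_terms = hit.forbidden_terms if hit else []  # handed to the decoding-time filter
```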
📝 Abstract
Large Language Models (LLMs) have demonstrated strong capabilities in memorizing vast amounts of knowledge across diverse domains. However, the ability to selectively forget specific knowledge is critical for ensuring the safety and compliance of deployed models. Existing unlearning efforts typically fine-tune the model with resources such as forget data, retain data, and a calibration model. These additional gradient steps blur the decision boundary between forget and retain knowledge, so unlearning often comes at the expense of overall performance. To avoid the negative impact of fine-tuning, it would be preferable to unlearn solely at inference time by guarding the model against generating responses related to the forget target, without degrading the fluency of text generation. In this work, we propose Generation-time Unlearning via Adaptive Restriction and Detection (GUARD), a framework that enables dynamic unlearning during LLM generation. Specifically, we first employ a prompt classifier to detect unlearning targets and extract the corresponding forbidden tokens. We then dynamically penalize and filter candidate tokens during generation using a combination of token matching and semantic matching, effectively preventing the model from leaking the forgotten content. Experimental results on copyright-content unlearning tasks over the Harry Potter dataset and the MUSE benchmark, as well as entity unlearning tasks on the TOFU dataset, demonstrate that GUARD achieves strong forget quality across various tasks while causing almost no degradation to the LLM's general capabilities, striking an excellent trade-off between forgetting and utility.
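
To make the decoding-time intervention concrete, the following minimal PyTorch sketch shows one way to combine token matching (an additive log-penalty on forbidden token ids) with semantic matching (masking candidates whose embeddings are close to a forbidden token). The function name `guard_adjust_logits`, the `penalty` and `sim_threshold` values, and the greedy-decoding loop in the trailing comments are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch, assuming forbidden token ids have already been extracted for the
# detected forget target; hyperparameters and names are illustrative, not GUARD's.
import torch


def guard_adjust_logits(
    logits: torch.Tensor,            # (vocab_size,) next-token logits from the LLM
    forbidden_ids: list[int],        # token ids extracted for the detected forget target
    token_embeddings: torch.Tensor,  # (vocab_size, dim) embedding table for semantic matching
    penalty: float = 10.0,           # additive log-penalty for exact matches (assumed value)
    sim_threshold: float = 0.8,      # cosine-similarity cutoff for semantic matches (assumed value)
) -> torch.Tensor:
    """Penalize exact forbidden tokens and filter semantically similar candidates."""
    adjusted = logits.clone()
    if not forbidden_ids:
        return adjusted

    idx = torch.tensor(forbidden_ids, dtype=torch.long)

    # Token matching: push down the logits of every forbidden token id.
    adjusted[idx] -= penalty

    # Semantic matching: hard-filter candidates whose embeddings are close to any forbidden token.
    forb = torch.nn.functional.normalize(token_embeddings[idx], dim=-1)   # (k, dim)
    vocab = torch.nn.functional.normalize(token_embeddings, dim=-1)       # (vocab_size, dim)
    max_sim = (vocab @ forb.T).max(dim=-1).values                         # (vocab_size,)
    adjusted[max_sim >= sim_threshold] = float("-inf")

    return adjusted


# Sketch of a greedy decoding loop around the adjustment (model and tokenizer omitted):
# for _ in range(max_new_tokens):
#     logits = model(input_ids).logits[0, -1]        # next-token logits
#     if prompt_hits_forget_target:                  # output of the prompt classifier
#         logits = guard_adjust_logits(logits, forbidden_ids, embedding_table)
#     next_id = int(torch.argmax(logits))
#     input_ids = torch.cat([input_ids, torch.tensor([[next_id]])], dim=-1)
```

In this sketch the additive penalty softly discourages exact forbidden tokens, while the similarity mask removes near-synonyms outright; the paper's combination of token matching and semantic matching could be weighted differently in practice.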