🤖 AI Summary
To address the privacy risks posed by confidential knowledge leakage from large language models (LLMs), this paper proposes LUNAR, a selective knowledge-removal (unlearning) method grounded in the Linear Representation Hypothesis. Rather than relying on implicit forgetting, LUNAR explicitly redirects the neural activations triggered by to-be-forgotten data toward a predefined "refusal" region of representation space, so that the model reliably expresses its inability to answer. This activation-redirection mechanism yields strong forgetting efficacy, controllable inference behavior, support for sequential unlearning requests, and robustness against white-box attacks. The approach rests on three core components: activation-space remapping, refusal-response triggering, and evaluation via the Deviation Score, a metric that jointly captures unlearning efficacy and model utility. On the PISTOL benchmark, LUNAR achieves a 2.9–11.7× improvement in Deviation Score over baselines. It also significantly reduces undesirable side effects such as factual hallucination and coherence degradation, while enhancing response consistency and contextual awareness. Empirical validation across realistic scenarios confirms both its effectiveness and practical applicability.
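The mechanism described above rests on the Linear Representation Hypothesis: the idea that behaviors such as refusal correspond to directions in activation space. As a rough illustration (my own sketch, not the authors' code), the snippet below estimates a "refusal direction" as the difference between mean hidden states for prompts a model declines to answer and prompts it answers normally. The model name (`gpt2`), layer index, and example prompts are placeholder assumptions, not details from the paper.

```python
# Illustrative sketch only (not LUNAR itself): estimate a "refusal direction" in a
# mid-layer representation space as a difference of mean activations, in the spirit
# of the Linear Representation Hypothesis. Model, layer, and prompts are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder base model
LAYER = 6             # hypothetical intermediate layer

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

@torch.no_grad()
def mean_last_token_hidden(prompts, layer=LAYER):
    """Mean hidden state at `layer` over the final token of each prompt."""
    states = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        out = model(**ids, output_hidden_states=True)
        states.append(out.hidden_states[layer][0, -1])
    return torch.stack(states).mean(dim=0)

# Toy examples: prompts the model tends to refuse vs. prompts it answers normally.
refusal_prompts = ["Tell me someone's private medical records."]
normal_prompts = ["What is the capital of France?"]

refusal_dir = mean_last_token_hidden(refusal_prompts) - mean_last_token_hidden(normal_prompts)
refusal_dir = refusal_dir / refusal_dir.norm()   # unit vector pointing toward the refusal region
print(refusal_dir.shape)                          # (hidden_size,)
```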
📝 Abstract
Large Language Models (LLMs) benefit from training on ever larger amounts of textual data, but as a result, they increasingly incur the risk of leaking private information. The ability to selectively remove knowledge from LLMs is, therefore, a highly desirable capability. In this paper, we propose LUNAR, a novel unlearning methodology grounded in the Linear Representation Hypothesis. LUNAR operates by redirecting the representations of unlearned data to regions that trigger the model's inherent ability to express its inability to answer. LUNAR achieves state-of-the-art unlearning performance while significantly enhancing the controllability of the unlearned model during inference. Specifically, LUNAR achieves between 2.9x and 11.7x improvements on the combined "unlearning efficacy" and "model utility" score ("Deviation Score") on the PISTOL dataset across various base models. We also demonstrate, through quantitative analysis and qualitative examples, LUNAR's superior controllability in generating coherent and contextually aware responses, mitigating undesired side effects of existing methods. Moreover, we demonstrate that LUNAR is robust against white-box adversarial attacks and versatile in handling real-world scenarios, such as processing sequential unlearning requests.
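To make the redirection step more concrete, here is a minimal, self-contained sketch (again, not LUNAR's implementation) of shifting one layer's hidden states toward a refusal direction during generation via a forward hook. For self-containedness a random unit vector stands in for the refusal direction, the shift is applied unconditionally rather than only to forget-set inputs, and the model, layer index, and shift strength `ALPHA` are placeholder assumptions.

```python
# Illustrative sketch only (not LUNAR itself): redirect hidden states at one layer
# toward a "refusal" region during generation using a forward hook. The refusal
# direction would normally be estimated from data (see the earlier sketch); a random
# unit vector is used here so the snippet runs standalone. Model, layer index, and
# shift strength (ALPHA) are placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder base model
LAYER = 6             # hypothetical intermediate layer to intervene on
ALPHA = 4.0           # hypothetical shift strength

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

refusal_dir = torch.randn(model.config.hidden_size)
refusal_dir = refusal_dir / refusal_dir.norm()   # stand-in for an estimated refusal direction

def redirect_hook(module, inputs, output):
    """Shift every hidden state produced by this layer toward the refusal direction."""
    hidden = output[0] if isinstance(output, tuple) else output
    shifted = hidden + ALPHA * refusal_dir.to(hidden.dtype)
    return (shifted,) + output[1:] if isinstance(output, tuple) else shifted

handle = model.transformer.h[LAYER].register_forward_hook(redirect_hook)

ids = tok("Summarize the confidential contract between the two parties.", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**ids, max_new_tokens=30, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))

handle.remove()   # restore the unmodified model
```

In practice one would estimate the direction from data, restrict the intervention to activations associated with the forget set, and tune the layer and shift strength to balance the unlearning-efficacy and model-utility components that the Deviation Score combines.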