Wisdom is Knowing What Not to Say: Hallucination-Free LLM Unlearning via Attention Shifting

📅 2025-10-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) face a selective-forgetting dilemma in knowledge-intensive applications: aggressive forgetting impairs utility, while conservative strategies risk hallucination. To address this, we propose an attention-shifting framework that intervenes at the attention layer via importance-aware attention suppression and attention-guided retention enhancement, establishing a soft forgetting boundary that enables targeted unlearning of sensitive knowledge while preserving linguistic structure and functional capability. The method jointly optimizes forgetting accuracy and hallucination resistance through a dual-objective loss under representation superposition. Evaluated on the ToFU and TDEC benchmarks, the approach achieves up to 15% and 10% higher accuracy, respectively, than prior unlearning methods while maintaining competitive hallucination-free unlearning effectiveness.

📝 Abstract
Growing computing power and the demand for AI-assisted decision-making have driven the rapid adoption of large language models (LLMs). At the same time, the potential retention of sensitive data in LLMs has spurred increasing research into machine unlearning. However, existing unlearning approaches face a critical dilemma: aggressive unlearning compromises model utility, while conservative strategies preserve utility but risk hallucinated responses. This significantly limits LLMs' reliability in knowledge-intensive applications. To address this, we introduce a novel Attention-Shifting (AS) framework for selective unlearning. AS is driven by two design objectives: (1) context-preserving suppression that attenuates attention to fact-bearing tokens without disrupting the LLM's linguistic structure; and (2) hallucination-resistant response shaping that discourages fabricated completions when the model is queried about unlearned content. AS realizes these objectives through two attention-level interventions: importance-aware suppression, applied to the unlearning set to reduce reliance on memorized knowledge, and attention-guided retention enhancement, which reinforces attention toward semantically essential tokens in the retained dataset to mitigate unintended degradation. The two components are jointly optimized via a dual-loss objective, which forms a soft boundary that localizes unlearning while preserving unrelated knowledge under representation superposition. Experimental results show that AS improves performance preservation over state-of-the-art unlearning methods, achieving up to 15% higher accuracy on the ToFU benchmark and 10% on the TDEC benchmark, while maintaining competitive hallucination-free unlearning effectiveness. Compared to existing methods, AS strikes a superior balance between unlearning effectiveness, generalization, and response reliability.
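The two attention-level interventions can be read as a reweighting of pre-softmax attention logits: suppress attention to fact-bearing keys in proportion to their importance, and mildly boost attention to essential retained tokens. The sketch below is a minimal illustration under that assumption; the function name, the importance scores, the forget mask, and the alpha/beta strengths are hypothetical placeholders, not the authors' implementation.

```python
import torch.nn.functional as F

def shifted_attention(q, k, v, importance, forget_mask, alpha=4.0, beta=1.0):
    """Scaled dot-product attention with an attention-shifting term.

    q, k, v:      (batch, heads, seq, d_head) projections.
    importance:   (batch, seq) per-token importance scores in [0, 1]
                  (hypothetical; e.g., derived from attention statistics).
    forget_mask:  (batch, seq) bool, True for fact-bearing tokens to unlearn.
    alpha, beta:  suppression / enhancement strengths (illustrative values).
    """
    d = q.size(-1)
    logits = q @ k.transpose(-2, -1) / d ** 0.5           # (B, H, S, S)

    # Importance-aware suppression: penalize attention *to* forget-set keys
    # in proportion to their importance, rather than hard-masking them, so
    # the surrounding linguistic structure stays intact.
    suppress = alpha * importance * forget_mask.float()   # (B, S)

    # Attention-guided retention enhancement: mildly boost attention to
    # semantically essential tokens outside the forget set.
    enhance = beta * importance * (~forget_mask).float()  # (B, S)

    shift = (enhance - suppress)[:, None, None, :]        # broadcast over heads/queries
    weights = F.softmax(logits + shift, dim=-1)
    return weights @ v
```

Shifting logits rather than zeroing attention weights is what makes the forgetting boundary "soft": suppressed tokens still participate in attention, so fluency around them degrades gracefully instead of collapsing.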
Problem

Research questions and friction points this paper is trying to address.

Addresses hallucination risks in LLMs during machine unlearning processes
Balances knowledge removal with utility preservation in language models
Prevents fabricated responses while maintaining model accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention-shifting framework for selective unlearning
Context-preserving suppression of fact-bearing tokens
Hallucination-resistant response shaping via dual-loss optimization (a hedged sketch follows this list)
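A hedged sketch of that dual-loss optimization, assuming a gradient-ascent-style term on the forget set and a standard language-modeling term on the retain set; the paper's exact losses, attention-level regularizers, and weighting may differ, and `model`, the batch fields, and `lambda_retain` are illustrative (a HuggingFace-style `.logits` output is assumed):

```python
import torch.nn.functional as F

def dual_loss(model, forget_batch, retain_batch, lambda_retain=1.0):
    # Forgetting term: push down the likelihood of memorized forget-set
    # completions (negated NLL, i.e., gradient ascent on those tokens).
    forget_logits = model(forget_batch["input_ids"]).logits  # (B, S, V)
    loss_forget = -F.cross_entropy(
        forget_logits.flatten(0, 1), forget_batch["labels"].flatten()
    )

    # Retention term: keep behavior on retained data unchanged, which
    # localizes the unlearning under representation superposition.
    retain_logits = model(retain_batch["input_ids"]).logits
    loss_retain = F.cross_entropy(
        retain_logits.flatten(0, 1), retain_batch["labels"].flatten()
    )
    return loss_forget + lambda_retain * loss_retain
```

Trading the two terms off through `lambda_retain` is what forms the soft boundary: the forgetting gradient stays confined to the targeted facts while the retention gradient anchors everything else.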
🔎 Similar Papers
No similar papers found.