🤖 AI Summary
This paper identifies and analyzes the "priming" effect in large language models (LLMs): after a new fact is learned through gradient-based fine-tuning, the model can inappropriately apply that fact in unrelated contexts, producing hallucination-like errors. Using "Outlandish," a curated dataset of 1320 diverse text samples, the authors show that the degree of priming after learning can be predicted from the token probability of key words measured before learning, a relationship that holds across architectures (PALM-2, Gemma, Llama), model sizes, and training stages. To modulate how new knowledge spreads into existing behavior, they introduce two techniques: (1) a *stepping-stone* text augmentation strategy and (2) an *ignore-k* update pruning method. These reduce undesirable priming by 50–95% while preserving the model's ability to learn the new information, improving the specificity and controllability of knowledge insertion into LLMs.
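For intuition, here is a minimal sketch, not the authors' released code, of the kind of pre-learning measurement described above: scoring how probable a key word is under the model before any fine-tuning. The Hugging Face model name, the single-sub-token simplification, and the example sentence are illustrative assumptions.

```python
# Hedged sketch: estimate the pre-learning probability of a key word,
# the quantity the paper reports as predictive of later priming.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumption: any causal LM could be used here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def keyword_probability(context: str, keyword: str) -> float:
    """Probability the model assigns to `keyword` as the next token after `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    # Simplification: score only the first sub-token of the key word.
    key_id = tokenizer(keyword, add_special_tokens=False).input_ids[0]
    with torch.no_grad():
        logits = model(ctx_ids).logits[0, -1]  # next-token logits
    return torch.softmax(logits, dim=-1)[key_id].item()

# A low pre-learning probability of the key word in context is the kind of
# signal the paper links to stronger priming after the fact is trained in.
p = keyword_probability("The color of sand on this beach is", " vermilion")
print(f"pre-learning keyword probability: {p:.2e}")
```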
📝 Abstract
Large language models learn and continually learn through the accumulation of gradient-based updates, but how individual pieces of new information affect existing knowledge, leading to both beneficial generalization and problematic hallucination, remains poorly understood. We demonstrate that when learning new information, LLMs exhibit a "priming" effect: learning a new fact can cause the model to inappropriately apply that knowledge in unrelated contexts. To systematically study this phenomenon, we introduce "Outlandish," a carefully curated dataset of 1320 diverse text samples designed to probe how new knowledge permeates through an LLM's existing knowledge base. Using this dataset, we show that the degree of priming after learning new information can be predicted by measuring the token probability of key words before learning. This relationship holds robustly across different model architectures (PALM-2, Gemma, Llama), sizes, and training stages. Finally, we develop two novel techniques to modulate how new knowledge affects existing model behavior: (1) a "stepping-stone" text augmentation strategy and (2) an "ignore-k" update pruning method. These approaches reduce undesirable priming effects by 50–95% while preserving the model's ability to learn new information. Our findings provide both empirical insights into how LLMs learn and practical tools for improving the specificity of knowledge insertion in language models. Further materials: https://sunchipsster1.github.io/projects/outlandish/
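As a rough illustration of the second mitigation technique, the sketch below assumes that "ignore-k" update pruning means zeroing out the k largest-magnitude gradient entries in each parameter tensor before the optimizer step; the actual selection rule and granularity in the paper may differ, and the helper name is hypothetical.

```python
# Hedged sketch of an "ignore-k"-style update pruning step. The per-tensor
# granularity and magnitude-based criterion are assumptions for illustration,
# not a reproduction of the authors' implementation.
import torch

def ignore_k_(parameters, k: int) -> None:
    """In-place: zero the k largest-magnitude gradient entries in each parameter tensor."""
    for p in parameters:
        if p.grad is None:
            continue
        flat = p.grad.view(-1)                 # view shares storage with p.grad
        k_eff = min(k, flat.numel())
        _, idx = torch.topk(flat.abs(), k_eff)  # indices of the largest updates
        flat[idx] = 0.0                         # suppress them; the rest pass through

# Usage inside a fine-tuning loop (model/optimizer are placeholders):
#   loss.backward()
#   ignore_k_(model.parameters(), k=500)
#   optimizer.step()
```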