🤖 AI Summary
Large language models (LLMs) suffer significant performance degradation during large-scale knowledge editing, particularly when updating thousands of facts, primarily due to embedding collisions among knowledge items. To address this, the authors propose NAMET (Noise-aware Model Editing in Transformers), which injects noise into MEMIT's memory-extraction phase to make editing resilient to such collisions. NAMET integrates via a one-line code modification and substantially improves editing robustness. Extensive experiments across six mainstream LLMs and three benchmark datasets show that, when editing thousands of factual statements, NAMET achieves an average 12.7% improvement in factual accuracy while maintaining 91.4% contextual consistency, supporting more efficient, scalable, and reliable knowledge updating in LLMs.
📝 Abstract
Model editing techniques are essential for efficiently updating knowledge in large language models (LLMs). However, the effectiveness of existing approaches degrades in massive editing scenarios, particularly when evaluated with practical metrics or in context-rich settings. We attribute these failures to embedding collisions among knowledge items, which undermine editing reliability at scale. To address this, we propose NAMET (Noise-aware Model Editing in Transformers), a simple yet effective method that introduces noise during memory extraction via a one-line modification to MEMIT. Extensive experiments across six LLMs and three datasets demonstrate that NAMET consistently outperforms existing methods when editing thousands of facts.
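The abstract describes NAMET's core change as injecting noise during MEMIT's memory extraction. A minimal sketch of that idea is shown below; the function name `extract_keys`, the Gaussian noise form, and the `noise_scale` parameter are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def extract_keys(hidden_states, noise_scale=0.1, rng=None):
    """Noise-aware memory extraction, sketched in the spirit of NAMET.

    hidden_states: (n_edits, d) array of key vectors, as a MEMIT-style
    editor would extract them for the facts being inserted.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Normalize keys, as MEMIT-style editors typically work with
    # direction-normalized representations.
    keys = hidden_states / np.linalg.norm(hidden_states, axis=-1, keepdims=True)
    # The hypothesized one-line change: perturb extracted keys so that
    # near-duplicate facts no longer collide in embedding space.
    keys = keys + noise_scale * rng.standard_normal(keys.shape)
    return keys
```

The intuition is that when thousands of facts map to nearly identical keys, their updates interfere; a small perturbation at extraction time spreads the keys apart while leaving each edit's target value unchanged.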