🤖 AI Summary
Large language models (LLMs) suffer from catastrophic interference in sequential knowledge editing: each new edit degrades previously injected knowledge and impairs pre-existing capabilities. To address this, we propose EvoEdit, a theoretically grounded editing method built on the *locate-then-edit* paradigm. EvoEdit performs each parameter update within the null space of both the original and previously edited knowledge representations, so every modification preserves earlier edits and the model's pre-existing knowledge without full retraining. This yields substantially improved editing stability and computational efficiency. Evaluated on realistic sequential editing benchmarks, EvoEdit matches or surpasses state-of-the-art methods in edit accuracy and generalization while achieving up to a 3.53× speedup. By enabling robust, scalable, and continual knowledge updates, EvoEdit provides a reliable foundation for maintaining LLMs' factual consistency over time.
📝 Abstract
Large language models (LLMs) require continual updates to rectify outdated or erroneous knowledge. Model editing has emerged as a compelling paradigm for introducing targeted modifications without the computational burden of full retraining. Existing approaches are mainly based on the locate-then-edit framework; however, in sequential editing settings, where multiple updates are applied over time, they exhibit significant limitations and suffer from catastrophic interference, i.e., new edits compromise previously integrated updates and degrade preserved knowledge. To address these challenges, we introduce EvoEdit, a novel editing strategy that mitigates catastrophic interference through sequential null-space alignment, enabling stable and efficient model editing. By aligning each incoming edit with the null space of prior updates, EvoEdit preserves both the original and previously modified knowledge representations, maintaining output invariance on preserved knowledge even across long edit sequences. Evaluations on real-world sequential knowledge-editing benchmarks show that EvoEdit achieves performance better than or comparable to prior state-of-the-art locate-then-edit techniques, with up to a 3.53× speedup. Overall, these results underscore the necessity of more principled approaches to maintaining LLMs in dynamically evolving information settings, while providing a simple yet effective solution with strong theoretical guarantees.
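The core idea behind null-space alignment can be illustrated with a toy linear example: if the columns of a matrix `K` stack the key vectors whose outputs must stay fixed (original knowledge plus prior edits), then projecting any raw edit direction onto the orthogonal complement of `span(K)` guarantees the update leaves those outputs unchanged. The sketch below is a minimal NumPy illustration under our own simplified assumptions; the dimensions, variable names, and the plain pseudoinverse-based projector are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 8, 3
W = rng.standard_normal((d, d))    # weight matrix being edited
K = rng.standard_normal((d, m))    # keys whose outputs must be preserved
dW = rng.standard_normal((d, d))   # raw, unconstrained edit direction

# Projector onto the orthogonal complement of span(K):
# P = I - K K^+, so (dW @ P) @ K = 0 for any dW, since K K^+ K = K.
P = np.eye(d) - K @ np.linalg.pinv(K)

# Apply only the null-space component of the edit.
W_new = W + dW @ P

# Outputs on the preserved keys are unchanged (up to float error).
print(np.allclose(W_new @ K, W @ K))  # True
```

Because the projection is applied per edit, each new update is constrained away from everything already preserved, which is the mechanism that keeps long edit sequences from interfering with one another.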