EvoEdit: Evolving Null-space Alignment for Robust and Efficient Knowledge Editing

📅 2025-10-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from catastrophic interference in sequential knowledge editing, where each new edit degrades previously injected knowledge and impairs pre-existing capabilities. To address this, we propose EvoEdit, a theoretically grounded editing method built on the *locate-then-edit* paradigm that performs sequential null-space alignment. EvoEdit applies parameter updates exclusively within the null space of prior edits' Jacobians, ensuring that each modification preserves both earlier edits and the original model's knowledge representations, without full retraining. This yields substantially improved editing stability and computational efficiency. Evaluated on realistic sequential editing benchmarks, EvoEdit matches or surpasses state-of-the-art methods in edit accuracy and generalization, while achieving up to a 3.53× speedup. By enabling robust, scalable, and continual knowledge updates, EvoEdit provides a reliable foundation for maintaining LLMs' factual consistency over time.
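The core mechanism described above, restricting each update to the null space of representations whose outputs must stay fixed, can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' implementation: `K_prev` is a stand-in for the stacked representations of previously edited and preserved knowledge, and `delta_W` for a raw candidate weight update.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): constrain a weight update so
# it cannot change the layer's outputs for previously preserved inputs.
rng = np.random.default_rng(0)
d_in, d_out, n_keys = 8, 4, 3

K_prev = rng.standard_normal((n_keys, d_in))  # representations to preserve
delta_W = rng.standard_normal((d_in, d_out))  # raw, unconstrained edit

# Orthonormal basis of null(K_prev) from the SVD: right singular vectors
# beyond the rank satisfy K_prev @ v = 0.
_, s, Vt = np.linalg.svd(K_prev)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:].T  # columns span the null space of K_prev

# Project the edit into that null space, so K_prev @ delta_W_safe == 0:
# the edit leaves all preserved outputs exactly invariant.
delta_W_safe = N @ (N.T @ delta_W)

print(np.allclose(K_prev @ delta_W_safe, 0))  # preserved keys unaffected
```

Repeating this projection for each incoming edit, against the accumulated keys of all earlier edits, is what keeps long edit sequences from interfering with one another.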

📝 Abstract
Large language models (LLMs) require continual updates to rectify outdated or erroneous knowledge. Model editing has emerged as a compelling paradigm for introducing targeted modifications without the computational burden of full retraining. Existing approaches are mainly based on a locate-then-edit framework. However, in sequential editing contexts, where multiple updates are applied over time, they exhibit significant limitations and suffer from catastrophic interference, i.e., new edits compromise previously integrated updates and degrade preserved knowledge. To address these challenges, we introduce EvoEdit, a novel editing strategy that mitigates catastrophic interference through sequential null-space alignment, enabling stable and efficient model editing. By performing sequential null-space alignment for each incoming edit, EvoEdit preserves both original and previously modified knowledge representations and maintains output invariance on preserved knowledge even across long edit sequences, effectively mitigating interference. Evaluations on real-world sequential knowledge-editing benchmarks show that EvoEdit achieves performance better than or comparable to prior state-of-the-art locate-then-edit techniques, with up to a 3.53× speedup. Overall, these results underscore the necessity of developing more principled approaches for designing LLMs in dynamically evolving information settings, while providing a simple yet effective solution with strong theoretical guarantees.
Problem

Research questions and friction points this paper is trying to address.

Addressing catastrophic interference in sequential model editing
Preserving original and modified knowledge during updates
Enabling stable knowledge editing without full retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sequential null-space alignment for robust editing
Preserves original and modified knowledge representations
Achieves speedup over prior locate-then-edit techniques
👥 Authors
Sicheng Lyu
McGill University
Yu Gu
McGill University
Xinyu Wang
McGill University
Jerry Huang
Mila—Quebec AI Institute
Sitao Luan
University of Montreal, Mila
Graph Learning, AI4Science, Graph for LLM, LLM for Graph, RL Reasoning
Yufei Cui
McGill University, MILA
Medical AI, RAG, LLM Agent, Predictive Uncertainty
Xiao-Wen Chang
McGill University
Peng Lu
Université de Montréal