QueueEDIT: Structural Self-Correction for Sequential Model Editing in LLMs

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address knowledge conflicts, capability degradation, and difficulties in modeling long-range dependencies during sequential model editing (SME) of large language models (LLMs), this paper proposes a self-correcting editing framework based on a parameter queue. The method precisely identifies and updates sensitive parameters by localizing knowledge neurons within Transformer layers and optimizing via a structured triplet-mapping loss. A dynamic parameter queue is introduced to store, align, and selectively recalibrate previously edited parameters, while freezing unrelated parameters to preserve general-purpose capabilities. Compared to existing approaches, the framework significantly improves editing accuracy and consistency across multi-step sequential editing tasks, maintains competitive performance in single-step editing, and sustains high performance on standard general-purpose NLP benchmarks throughout the editing process.

📝 Abstract
Recently, large language models (LLMs) have demonstrated impressive results but still suffer from hallucinations. Model editing has been proposed to correct factual inaccuracies in LLMs. A challenging case is sequential model editing (SME), which aims to rectify errors continuously rather than treating them as a one-time task. During SME, the general capabilities of LLMs can be negatively affected by the introduction of new parameters. In this paper, we propose a queue-based self-correction framework (QueueEDIT) that not only enhances SME performance by addressing long-sequence dependencies but also mitigates the impact of parameter bias on the general capabilities of LLMs. Specifically, we first introduce a structural mapping editing loss to map the triplets to the knowledge-sensitive neurons within the Transformer layers of LLMs. We then store the located parameters for each piece of edited knowledge in a queue and dynamically align previously edited parameters. In each edit, we select the queue parameters most relevant to the currently located parameters to determine whether previous knowledge needs realignment. Irrelevant parameters in the queue are frozen, and the parameters at the queue head are applied to the LLM so that they do not harm its general abilities. Experiments show that our framework significantly outperforms strong baselines across various SME settings and remains competitive in single-turn editing. The resulting LLMs also preserve high capabilities on general NLP tasks throughout the SME process.
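The queue mechanism described in the abstract can be sketched in miniature. The class below is an illustrative assumption, not the paper's implementation: `ParamQueue`, the cosine-similarity relevance test, the `0.1` realignment step, and all parameter names are hypothetical stand-ins for the paper's "store, select relevant, freeze irrelevant, apply queue head" cycle.

```python
from collections import deque

import numpy as np


class ParamQueue:
    """Hypothetical sketch of QueueEDIT's dynamic parameter queue.

    Each entry is the parameter delta located for one edited fact.
    The cosine-similarity relevance test and realignment step are
    illustrative assumptions, not the paper's exact formulation.
    """

    def __init__(self, maxlen=8, relevance_threshold=0.5):
        self.queue = deque(maxlen=maxlen)
        self.relevance_threshold = relevance_threshold

    def push(self, delta):
        """Store the located parameter update for the newest edit."""
        self.queue.append(np.asarray(delta, dtype=float))

    def realign(self, current_delta, step=0.1):
        """Recalibrate past edits relevant to the current one; freeze the rest."""
        current = np.asarray(current_delta, dtype=float)
        for i, past in enumerate(self.queue):
            sim = np.dot(past, current) / (
                np.linalg.norm(past) * np.linalg.norm(current) + 1e-8
            )
            if sim > self.relevance_threshold:
                # Relevant: nudge the stored delta toward consistency.
                self.queue[i] = past + step * (current - past)
            # Irrelevant entries are left untouched (frozen).

    def pop_head(self):
        """Release the oldest edit's parameters for application to the LLM."""
        return self.queue.popleft() if self.queue else None
```

Under this reading, each sequential edit pushes its located delta, realigns only the stored deltas that overlap with it, and eventually pops the queue head back into the model, so unrelated edits (and the frozen base parameters) are never disturbed.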
Problem

Research questions and friction points this paper is trying to address.

Corrects factual errors in LLMs continuously via sequential editing
Reduces negative impact of new parameters on LLM general capabilities
Enhances long-sequence dependency handling during model editing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Queue-based self-correction framework for LLMs
Structural mapping editing loss for knowledge-sensitive neurons
Dynamic parameter alignment and freezing irrelevant parameters