Constraining Sequential Model Editing with Editing Anchor Compression

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer severe degradation of their general capabilities during sequential knowledge editing because the edited parameter matrix drifts excessively from its original state. Method: This paper proposes the Editing Anchor Compression (EAC) framework. It first empirically quantifies the statistical correlation between edit count and parameter deviation, then selects importance-aware editing anchors and applies low-rank compression to constrain parameter shifts while preserving the newly injected knowledge. EAC is plug-and-play with mainstream editing algorithms (e.g., ROME, MEMIT). Contribution/Results: Evaluated on three LLM architectures across four downstream tasks, EAC preserves over 70% of general capabilities and retains editing knowledge better than the unmodified counterpart methods, significantly mitigating the capability collapse induced by sequential editing.
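The low-rank compression idea can be illustrated with a minimal sketch: truncate an editing update to its top singular directions so the edited matrix stays close to the original. This is an assumption-laden toy (plain truncated SVD via NumPy), not the paper's actual EAC implementation; `compress_edit`, the matrix sizes, and the rank are all hypothetical.

```python
import numpy as np

def compress_edit(delta, rank):
    """Hypothetical sketch: keep only the top-`rank` singular directions
    of an editing update, discarding the rest to limit how far the
    edited matrix deviates from the original."""
    U, s, Vt = np.linalg.svd(delta, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

rng = np.random.default_rng(0)
delta = rng.standard_normal((64, 64)) * 0.1  # stand-in for one edit's update

compressed = compress_edit(delta, rank=4)

# The compressed update moves the weight matrix less than the full update.
dev_full = np.linalg.norm(delta)       # Frobenius norm of the raw update
dev_comp = np.linalg.norm(compressed)  # norm after low-rank compression
```

The design intuition is that a small number of directions carry most of the newly encoded relation, so discarding the remainder trades little editing signal for a much smaller parameter shift.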

Technology Category

Application Category

📝 Abstract
Large language models (LLMs) struggle with hallucinations due to false or outdated knowledge. Given the high resource demands of retraining these models, there is an increasing focus on model editing. However, the general abilities of LLMs across downstream tasks are prone to significant degradation during sequential editing. This paper statistically observes that the parameter matrix after editing deviates increasingly from its previous state as the number of edits grows. This serious deviation disrupts the original knowledge associations within LLMs and degrades their general abilities. To this end, a framework termed Editing Anchor Compression (EAC) is proposed to constrain the deviation of the parameter matrix during sequential editing. It compresses the editing information by selecting editing anchors that are important in encoding new relations without deviating too much from the original matrix, thereby preserving the general abilities. Experiments applying EAC to two popular editing methods on three LLMs across four tasks are conducted. Evaluation results show that EAC effectively minimizes unreasonable deviations caused by model editing, preserving over 70% of the general abilities while better retaining the editing knowledge compared to the original counterpart methods.
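The statistical observation in the abstract, that deviation accumulates with the number of edits, can be simulated with a toy random-walk sketch; the matrix size, step scale, and Gaussian updates are illustrative assumptions, not the paper's measurement protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
W0 = rng.standard_normal((32, 32))  # pre-editing weight matrix (toy)
W = W0.copy()

deviations = []
for _ in range(10):  # ten simulated sequential edits
    # Stand-in for one edit's parameter update (hypothetical scale).
    W += rng.standard_normal(W.shape) * 0.05
    # Frobenius-norm distance from the original matrix after this edit.
    deviations.append(np.linalg.norm(W - W0))

# deviations grows with the edit count, mirroring the paper's observation
# that unconstrained sequential editing drifts the matrix ever further.
```

Constraining each step's contribution (as EAC does via anchor selection and compression) caps how quickly this distance can accumulate.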
Problem

Research questions and friction points this paper is trying to address.

Sequential editing causes significant degradation of LLMs' general abilities across downstream tasks.
Each successive edit deviates the parameter matrix further from its pre-editing state, disrupting original knowledge associations.
Retraining to correct false or outdated knowledge is prohibitively expensive, motivating lightweight model editing instead.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Editing Anchor Compression (EAC) constrains parameter-matrix deviation during sequential editing
Importance-aware anchor selection encodes new relations without drifting far from the original matrix
Plug-and-play with existing editing methods, preserving general abilities while retaining edited knowledge
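The anchor-selection idea above can be sketched as keeping only the most important columns of an editing update. The importance score here (per-column L2 norm) and the function name `select_anchors` are hypothetical stand-ins; the paper's actual scoring and selection may differ.

```python
import numpy as np

def select_anchors(delta, k):
    """Hypothetical sketch: keep the k columns ('editing anchors') of the
    update with the largest L2 norm and zero out the rest, so only the
    most important directions modify the weight matrix."""
    scores = np.linalg.norm(delta, axis=0)   # per-column importance (assumed)
    keep = np.argsort(scores)[-k:]           # indices of the top-k columns
    sparse = np.zeros_like(delta)
    sparse[:, keep] = delta[:, keep]
    return sparse

rng = np.random.default_rng(1)
delta = rng.standard_normal((16, 16))  # stand-in editing update
anchored = select_anchors(delta, k=3)
```

Because only a few anchor columns survive, the retained update necessarily moves the matrix less than the full update would, which is the mechanism by which general abilities are preserved.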