Resolving UnderEdit & OverEdit with Iterative & Neighbor-Assisted Model Editing

📅 2025-03-14
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address two failure modes of large language model (LLM) editing—unsuccessful knowledge updates (UnderEdit) and contamination of neighboring knowledge that should remain unchanged (OverEdit)—this paper proposes two complementary techniques. Iterative model editing applies the parameter update repeatedly, based on the hypothesis that a single update is often insufficient, to mitigate UnderEdit; neighbor-assisted model editing incorporates neighboring knowledge into the editing process to suppress OverEdit. Both techniques integrate with existing model editing algorithms without modifying the base model. Extensive experiments across diverse LLMs (e.g., LLaMA, GPT-J), editing methods (ROME, MEMIT), and benchmark datasets (CounterFact, zsRE) show substantial improvements: UnderEdit is reduced by up to 38 percentage points and OverEdit by up to 6 percentage points, confirming gains in editing accuracy and robustness.

📝 Abstract
Large Language Models (LLMs) are used in various downstream language tasks, making it crucial to keep their knowledge up-to-date, but both retraining and fine-tuning the model can be costly. Model editing offers an efficient and effective alternative, applying a single update to only a key subset of model parameters. While efficient, these methods are not perfect. Sometimes knowledge edits are unsuccessful, i.e., UnderEdit, or the edit contaminates neighboring knowledge that should remain unchanged, i.e., OverEdit. To address these limitations, we propose iterative model editing, based on our hypothesis that a single parameter update is often insufficient, to mitigate UnderEdit, and neighbor-assisted model editing, which incorporates neighboring knowledge during editing to minimize OverEdit. Extensive experiments demonstrate that our methods effectively reduce UnderEdit by up to 38 percentage points and OverEdit by up to 6 percentage points across multiple model editing algorithms, LLMs, and benchmark datasets.
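The two ideas in the abstract can be illustrated with a toy sketch (this is not the authors' implementation: the linear "model" `y = W @ key`, and the names `iterative_edit`, `one_edit_round`, and `edit_success` are all hypothetical stand-ins for an edited LLM weight matrix and its update loop):

```python
import numpy as np

def edit_success(W, key, target, atol=0.05):
    # The edit "sticks" once the output for the edited key matches the target.
    return np.allclose(W @ key, target, atol=atol)

def one_edit_round(W, key, target, neighbors, lam=1.0, lr=0.1, steps=100):
    # One editing round: gradient descent pulls W @ key toward the new target,
    # while a penalty term holds the outputs for neighboring keys at their
    # pre-edit values (the neighbor-assisted part, limiting OverEdit).
    preserved = [(k, W @ k) for k in neighbors]
    for _ in range(steps):
        grad = np.outer(W @ key - target, key)       # edit-loss gradient
        for k, v in preserved:                       # neighbor-preservation gradient
            grad = grad + lam * np.outer(W @ k - v, k)
        W = W - lr * grad
    return W

def iterative_edit(W, key, target, neighbors, max_rounds=10):
    # Repeat the parameter update until the edit succeeds, reflecting the
    # paper's hypothesis that one update is often insufficient (UnderEdit).
    for _ in range(max_rounds):
        if edit_success(W, key, target):
            break
        W = one_edit_round(W, key, target, neighbors)
    return W
```

In a real LLM the "keys" would be hidden activations and `W` a located MLP weight, as in ROME/MEMIT-style editing; the sketch only shows how the iterate-until-success loop and the neighbor term fit together.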
Problem

Research questions and friction points this paper is trying to address.

Addresses UnderEdit in model editing by iterative updates
Mitigates OverEdit using neighbor-assisted knowledge incorporation
Improves efficiency and accuracy of Large Language Model updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative model editing reduces UnderEdit.
Neighbor-assisted editing minimizes OverEdit.
Integrates with existing editing algorithms without modifying the base model.