Massive Editing for Large Language Models Based on Dynamic Weight Generation

📅 2025-12-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of simultaneously ensuring reliability, generality, and locality in large-scale knowledge editing for large language models (LLMs), this paper proposes MeG: a method that injects a single pluggable dynamic neuron into the target layer and employs a conditional diffusion model to generate its weights on demand—replacing full-parameter fine-tuning. This is the first work to integrate diffusion models into knowledge editing, enabling parameter-efficient weight generation and precise, controllable edits. MeG achieves batched knowledge updates with minimal architectural overhead. It consistently improves all three core metrics—Reliability, Generality, and Locality—with an absolute Locality gain reaching high single-digit percentage points. Extensive experiments across multiple knowledge editing benchmarks demonstrate that MeG significantly outperforms state-of-the-art methods, validating its effectiveness in jointly optimizing edit accuracy, computational efficiency, and edit controllability at scale.

📝 Abstract
Knowledge Editing (KE) studies how to modify specific knowledge in Large Language Models (LLMs) at low cost compared to pre-training. Performing large-scale edits on LLMs while preserving the Reliability, Generality, and Locality of the edits remains challenging. This paper proposes a Massive editing approach for LLMs based on dynamic weight Generation (MeG). MeG attaches a dynamic-weight neuron to specific layers of the LLM and uses a diffusion model to generate the weights of this neuron conditioned on the input query for the edited knowledge, so that large-scale knowledge editing is achieved by adding only a single dynamic-weight neuron. Experiments show that MeG significantly improves large-scale KE on the Reliability, Generality, and Locality metrics compared with existing knowledge editing methods, with a particularly large absolute gain on Locality, demonstrating the advantages of the proposed method.
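The core mechanism the abstract describes, a single extra neuron spliced into a transformer layer whose weights can be swapped in and out, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all shapes, names, and the plain-NumPy FFN formulation are assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16  # hidden size (illustrative; real LLMs use thousands)

def ffn(h, W_in, W_out):
    """Ordinary two-layer FFN sub-block of a transformer layer."""
    return np.maximum(h @ W_in, 0.0) @ W_out

def ffn_with_dynamic_neuron(h, W_in, W_out, w_key, w_val):
    """Same FFN plus one extra 'dynamic' neuron whose weights
    (w_key, w_val) are generated per edit query rather than trained."""
    base = ffn(h, W_in, W_out)
    act = np.maximum(h @ w_key, 0.0)    # per-token activation of the neuron
    return base + np.outer(act, w_val)  # neuron writes its value vector

W_in = rng.normal(size=(d_model, 4 * d_model)) / np.sqrt(d_model)
W_out = rng.normal(size=(4 * d_model, d_model)) / np.sqrt(4 * d_model)
h = rng.normal(size=(3, d_model))       # hidden states of 3 tokens

# The "pluggable" property: with zero dynamic weights the edited layer
# reduces exactly to the original layer, so the edit can be disabled.
zero = np.zeros(d_model)
assert np.allclose(ffn(h, W_in, W_out),
                   ffn_with_dynamic_neuron(h, W_in, W_out, zero, zero))
```

Because the neuron is additive on top of the frozen FFN, no original parameters are modified, which is what keeps edits local.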
Problem

Research questions and friction points this paper is trying to address.

Performing large-scale knowledge editing in LLMs efficiently
Preserving reliability, generality, and locality across many edits
Generating edit-specific weights via dynamic neurons and diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic weight neuron attached to specific layers
Diffusion model generates neuron weights conditionally
Single dynamic neuron enables large-scale knowledge editing
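The second ingredient above, conditionally generating the neuron's weights with a diffusion model, can be sketched as a toy DDPM-style reverse process conditioned on the edit query's embedding. The denoiser here is an untrained random linear map standing in for the learned network, and the step count, noise schedule, and all shapes are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16        # dimension of the neuron's weight vector (illustrative)
T = 10        # diffusion steps (illustrative)

betas = np.linspace(1e-4, 0.1, T)      # toy linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, query_emb, theta):
    """Stand-in for the learned noise-prediction network; in MeG this
    would be trained, here it is just a random linear map."""
    inp = np.concatenate([x_t, query_emb, [t / T]])
    return theta @ inp

def generate_neuron_weights(query_emb, theta):
    """DDPM-style reverse process: start from Gaussian noise and
    iteratively denoise, conditioned on the edit-query embedding."""
    x = rng.normal(size=d)
    for t in range(T - 1, -1, -1):
        eps_hat = denoiser(x, t, query_emb, theta)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:                       # no noise on the final step
            x = x + np.sqrt(betas[t]) * rng.normal(size=d)
    return x

theta = rng.normal(size=(d, d + d + 1)) * 0.01
query = rng.normal(size=d)              # embedding of one edit query
w = generate_neuron_weights(query, theta)
print(w.shape)  # → (16,): one weight vector for the dynamic neuron
```

Generating weights on demand from the query, instead of storing one trained neuron per fact, is what lets a single pluggable neuron serve many edits.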
Wentao Wan
Sun Yat-sen University
Artificial Intelligence · Cognitive AI · Deep Learning · Neural-Symbolic · Question Answering
Qiqing Lao
School of Computer Science and Engineering, Sun Yat-sen University
Zhiwei Xie
School of Computer Science and Engineering, Sun Yat-sen University
Hefeng Wu
Sun Yat-sen University
Computer Vision · Machine Learning · Artificial Intelligence
Runnan Lin
School of Computer Science and Engineering, Sun Yat-sen University
Liang Lin
Fellow of IEEE/IAPR, Professor of Computer Science, Sun Yat-sen University
Embodied AI · Causal Inference and Learning · Multimodal Data Analysis
Keze Wang
School of Computer Science and Engineering, Sun Yat-sen University