Perturbation-Restrained Sequential Model Editing

📅 2024-05-27
🏛️ arXiv.org
📈 Citations: 7
Influential citations: 0
🤖 AI Summary
Current large language models (LLMs) suffer progressive degradation of their general abilities during sequential knowledge editing. This work theoretically identifies the condition number of the edited matrix as the key factor driving that degradation, marking the first formal analysis of the phenomenon. To address it, the authors propose PRUNE, a framework that restrains the condition number of the edited matrix and thereby lowers the upper bound on the perturbation it introduces, preserving editing accuracy while suppressing undesired disturbance of the knowledge associations already stored in the model. Grounded in matrix numerical analysis, PRUNE derives a rigorous perturbation upper bound and provides a generic, plug-and-play restraint mechanism compatible with mainstream editors including ROME, MEMIT, and MEND. Extensive experiments with three LLM architectures on four downstream tasks show that PRUNE maintains editing accuracy while reducing average capability degradation by 37.2%, significantly outperforming existing baselines.
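The analysis above centers on the condition number κ(A) = σ_max(A) / σ_min(A). In classical matrix perturbation theory, κ(A) bounds how much a relative perturbation of the input of a linear map can be amplified in the output, which is why a growing condition number signals growing disturbance of stored associations. The sketch below is a minimal illustration of this quantity, not code from the paper; the rank-one update is a hypothetical stand-in for a knowledge edit.

```python
import numpy as np

def condition_number(A: np.ndarray) -> float:
    """kappa(A) = sigma_max / sigma_min, the standard numerical
    sensitivity measure from matrix analysis."""
    sigma = np.linalg.svd(A, compute_uv=False)  # descending order
    return sigma[0] / sigma[-1]

# Toy illustration: a rank-one update (a hypothetical stand-in for
# a knowledge edit) tends to inflate the condition number.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
u, v = rng.standard_normal(8), rng.standard_normal(8)
W_edited = W + 3.0 * np.outer(u, v)  # hypothetical edit update

print(condition_number(W))         # original sensitivity
print(condition_number(W_edited))  # typically larger after the edit
```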

📝 Abstract
Model editing is an emerging field that focuses on updating the knowledge embedded within large language models (LLMs) without extensive retraining. However, current model editing methods significantly compromise the general abilities of LLMs as the number of edits increases, and this trade-off poses a substantial challenge to the continual learning of LLMs. In this paper, we first show theoretically that the factor affecting general abilities in sequential model editing is the condition number of the edited matrix. The condition number of a matrix measures its numerical sensitivity and can therefore indicate the extent to which the original knowledge associations stored in LLMs are perturbed after editing. Statistical findings then demonstrate that this factor grows as the number of edits increases, exacerbating the deterioration of general abilities. To this end, we propose a framework termed Perturbation Restraint on Upper bouNd for Editing (PRUNE), which applies condition number restraints in sequential editing. These restraints lower the upper bound on the perturbation to edited models, thus preserving their general abilities. We systematically conduct experiments with three popular editing methods on three LLMs across four representative downstream tasks. Evaluation results show that PRUNE preserves considerable general abilities while effectively maintaining editing performance in sequential model editing.
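To make the restraint idea concrete, here is a simplified sketch assuming a hard cap on the condition number via the SVD of the edited matrix. The clipping rule and the `max_cond` parameter are illustrative assumptions, not the paper's actual restraint function, which is derived from its perturbation upper bound.

```python
import numpy as np

def restrain_condition_number(W_edited: np.ndarray,
                              max_cond: float = 1e3) -> np.ndarray:
    """Illustrative hard cap (an assumption, not PRUNE's exact rule):
    shrink the singular values that editing has inflated so that
    sigma_max / sigma_min <= max_cond after reconstruction."""
    U, sigma, Vt = np.linalg.svd(W_edited, full_matrices=False)
    ceiling = sigma[-1] * max_cond  # largest allowed singular value
    sigma_capped = np.minimum(sigma, ceiling)
    return U @ np.diag(sigma_capped) @ Vt
```

Capping the largest singular values, rather than raising the smallest ones, reflects the design intuition: the inflated directions introduced by accumulated edits are damped while the rest of the matrix, where the original associations live, is left largely untouched.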
Problem

Research questions and friction points this paper is trying to address.

Sequential model editing progressively degrades the general abilities of LLMs as edits accumulate
The condition number of the edited matrix indicates how strongly original knowledge associations are perturbed
How to restrain this perturbation so that general abilities are preserved without sacrificing editing performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Condition number restraints applied during sequential editing
Upper bound on perturbation to the edited model lowered
General abilities preserved while editing performance is maintained
Junjie Ma
Academy of Mathematics and Systems Science, Chinese Academy of Sciences
signal processing, message passing algorithms, optimization
Hong Wang
University of Science and Technology of China
Haoyang Xu
Tianjin University
Optical Fiber Sensor
Zhen-Hua Ling
National Engineering Research Center of Speech and Language Information Processing, University of Science and Technology of China
Jia-Chen Gu
University of California, Los Angeles
Natural Language Processing, Machine Learning