Related Knowledge Perturbation Matters: Rethinking Multiple Pieces of Knowledge Editing in Same-Subject

📅 2025-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the performance degradation observed when editing multiple attributes of the same subject in knowledge editing, identifying a phenomenon termed "related knowledge perturbation": subsequent edits undermine prior ones because editing methods rely excessively on subject representations. To formalize this phenomenon and validate its prevalence, we introduce the Same-Subject Related Knowledge Editing (S²RKE) benchmark, the first of its kind. Building on locate-then-edit methods (e.g., ROME, MEMIT), we propose a causal-tracing and modular-attribution analysis that quantifies the negative correlation between subject-representation dependency and edit consistency. Empirical results show that reducing dependency on the subject representation significantly improves consistency across sequential edits. Our contributions are twofold: (1) establishing S²RKE as a novel evaluation standard for intra-subject knowledge consistency, and (2) proposing the weakening of strong subject binding as a fundamental design principle for robust knowledge editing.

📝 Abstract
Knowledge editing has become a promising approach for efficiently and precisely updating knowledge embedded in large language models (LLMs). In this work, we focus on Same-Subject Editing, which involves modifying multiple attributes of a single entity to ensure comprehensive and consistent updates to entity-centric knowledge. Through preliminary observation, we identify a significant challenge: current state-of-the-art editing methods struggle when tasked with editing multiple related knowledge pieces for the same subject. To address the lack of relevant editing data for identical subjects in traditional benchmarks, we introduce the S²RKE (Same-Subject Related Knowledge Editing) benchmark. Our extensive experiments reveal that only mainstream locate-then-edit methods, such as ROME and MEMIT, exhibit "related knowledge perturbation," where subsequent edits interfere with earlier ones. Further analysis reveals that these methods over-rely on subject information, neglecting other critical factors, which reduces editing effectiveness.
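The evaluation protocol implied by the abstract can be illustrated with a minimal, hypothetical harness (a toy stand-in, not the paper's code): apply several edits to one subject in sequence, then measure how many earlier edits still hold after each new one. Here the "model" is a simple key-value store, and the `perturbation` parameter mimics a locate-then-edit method that over-relies on the shared subject representation, so each new same-subject edit may corrupt earlier ones.

```python
import random

def run_sequential_edits(edits, perturbation=0.0, seed=0):
    """Apply (subject, relation, object) edits in order; return, for each
    step, the fraction of earlier edits that are still correctly recalled."""
    rng = random.Random(seed)
    memory = {}  # (subject, relation) -> object, a toy stand-in for an LLM
    retention = []
    for i, (subj, rel, obj) in enumerate(edits):
        memory[(subj, rel)] = obj
        # With probability `perturbation`, each earlier fact about the SAME
        # subject is disturbed by the new write to its shared representation.
        for (s, r) in list(memory):
            if (s, r) != (subj, rel) and s == subj and rng.random() < perturbation:
                memory[(s, r)] = None  # earlier edit is no longer recalled
        kept = sum(1 for j in range(i)
                   if memory[(edits[j][0], edits[j][1])] == edits[j][2])
        retention.append(kept / i if i else 1.0)
    return retention

# Five edits to one subject ("Paris" with hypothetical relations/objects).
edits = [("Paris", rel, f"obj{k}") for k, rel in
         enumerate(["capital_of", "population", "mayor", "founded", "river"])]

print(run_sequential_edits(edits, perturbation=0.0))  # ideal editor: all 1.0
print(run_sequential_edits(edits, perturbation=0.5))  # perturbed: retention degrades
```

A real S²RKE-style evaluation would replace the key-value store with an edited LLM and query it in natural language, but the retention metric over sequential same-subject edits is the same idea.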
Problem

Research questions and friction points this paper is trying to address.

Challenges in editing multiple related knowledge pieces
Lack of relevant editing data for identical subjects
Over-reliance on subject information reduces effectiveness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Same-Subject Editing technique
Introducing S2RKE benchmark
Analyzing related knowledge perturbation
Zenghao Duan
CAS Key Laboratory of AI Safety, Institute of Computing Technology, CAS
large language model
Wenbin Duan
People’s Public Security University of China, Beijing, China
Zhiyi Yin
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Yinghan Shen
Institute of Computing Technology, Chinese Academy of Sciences
Personalized LLM, Knowledge graph, Social Computing
Shaoling Jing
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Jie Zhang
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Huawei Shen
Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
Xueqi Cheng
Ph.D. student, Florida State University
Data mining, LLM, GNN, Computational social science