🤖 AI Summary
This work addresses critical conceptual flaws and insufficient sensitivity in existing evaluation protocols for specificity in large language model editing, which hinder accurate assessment of non-target knowledge retention. The authors show empirically, for the first time, that current specificity metrics correlate only weakly with the strength of specificity regularizers, and propose a novel constructive evaluation protocol that resolves the conflict between open-ended generation and the assumption of deterministic answers while mitigating query-irrelevant fluency biases, thereby enabling more precise specificity measurement. The protocol also introduces a continuously adjustable strictness framework that significantly enhances discriminative power over the knowledge-retention performance of different editing methods. Empirical results demonstrate that the new metric consistently exhibits superior sensitivity and correlation across diverse models, datasets, and editing approaches.
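To make the "continuously adjustable strictness" idea concrete, here is a minimal, hypothetical sketch. The threshold-based preservation criterion below (`preserved`, `tau`) is an illustrative assumption for exposition only, not the paper's actual protocol: a non-target answer counts as preserved if the edited model retains at least a fraction `tau` of its pre-edit answer probability, so sweeping `tau` over (0, 1] yields a near-continuous family of metrics from lenient to strict.

```python
# Hypothetical sketch of a strictness-adjustable specificity check.
# The threshold criterion is an assumption for illustration, not the
# paper's protocol; probabilities below are made-up placeholders.

def preserved(prob_before: float, prob_after: float, tau: float) -> bool:
    """Count a non-target answer as preserved if the edited model keeps
    at least a fraction `tau` of the pre-edit answer probability."""
    return prob_after >= tau * prob_before

# (pre-edit, post-edit) answer probabilities on non-target queries.
probs = [(0.80, 0.72), (0.60, 0.30), (0.90, 0.88)]

# tau -> 0 is maximally lenient; tau -> 1 is maximally strict.
for tau in (0.5, 0.9, 0.99):
    rate = sum(preserved(b, a, tau) for b, a in probs) / len(probs)
    print(f"tau={tau:.2f}: specificity={rate:.2f}")
```

Because `tau` is a real-valued knob rather than a binary pass/fail rule, two editing methods that tie under one strictness level can still be separated at another.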
📝 Abstract
Model editing has recently emerged as a popular paradigm for efficiently updating knowledge in LLMs. A central desideratum of updating knowledge is to balance editing efficacy, i.e., the successful injection of target knowledge, against specificity (also known as edit locality), i.e., the preservation of existing non-target knowledge. However, we find that existing specificity evaluation protocols are inadequate for this purpose. We systematically elaborate on three fundamental issues these protocols face. Beyond the conceptual issues, we empirically demonstrate that existing specificity metrics are only weakly correlated with the strength of specificity regularizers. We also find that current metrics lack sufficient sensitivity, rendering them ineffective at distinguishing the specificity performance of different methods. Finally, we propose a constructive evaluation protocol. Under this protocol, the conflict between open-ended LLM generation and the assumption of deterministic answers is eliminated, query-independent fluency biases are avoided, and the evaluation strictness can be smoothly adjusted within a near-continuous space. Experiments across various LLMs, datasets, and editing methods show that metrics derived from the proposed protocol are more sensitive to changes in the strength of specificity regularizers and correlate strongly with them, enabling more fine-grained discrimination of different methods' knowledge-preservation capabilities.