🤖 AI Summary
This work addresses the challenge that existing model editing methods struggle to maintain consistency of rule-level knowledge across its diverse representational forms. It reveals, for the first time, that rule knowledge in Transformers is stored in a hierarchical and heterogeneous manner: formal expressions and textual descriptions are concentrated in early layers, while rule instances are distributed across middle layers. Building on this insight, the authors propose Distributed Multi-Layer Editing (DMLE), a non-contiguous, multi-layer collaborative editing strategy that applies a shared early-layer update to rule formulations and separate middle-layer updates to individual instances. Using fine-grained causal tracing to locate rule knowledge, the method is evaluated on the extended RuleEdit benchmark of 200 manually verified rules spanning mathematics and physics. Experiments across GPT-J-6B, Qwen2.5-7B, Qwen2-7B, and LLaMA-3-8B show substantial gains, improving instance portability and rule understanding by 13.91 and 50.19 percentage points on average, significantly outperforming current baselines.
📝 Abstract
Large language models store not only isolated facts but also rules that support reasoning across symbolic expressions, natural language explanations, and concrete instances. Yet most model editing methods are built for fact-level knowledge, assuming that a target edit can be achieved through a localized intervention. This assumption does not hold for rule-level knowledge, where a single rule must remain consistent across multiple interdependent forms. We investigate this problem through a mechanistic study of rule-level knowledge editing. To support this study, we extend the RuleEdit benchmark from 80 to 200 manually verified rules spanning mathematics and physics. Fine-grained causal tracing reveals a form-specific organization of rule knowledge in transformer layers: formulas and descriptions are concentrated in earlier layers, while instances are more associated with middle layers. These results suggest that rule knowledge is not uniformly localized, and therefore cannot be reliably edited by a single-layer or contiguous-block intervention. Based on this insight, we propose Distributed Multi-Layer Editing (DMLE), which applies a shared early-layer update to formulas and descriptions and a separate middle-layer update to instances. While remaining competitive on standard editing metrics, DMLE achieves substantially stronger rule-level editing performance. On average, it improves instance portability and rule understanding by 13.91 and 50.19 percentage points, respectively, over the strongest baseline across GPT-J-6B, Qwen2.5-7B, Qwen2-7B, and LLaMA-3-8B. The code is available at https://github.com/Pepper66/DMLE.