Representation Shattering in Transformers: A Synthetic Study with Knowledge Editing

📅 2024-10-22
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Knowledge editing (KE) frequently induces "representation shattering": systematic distortion of non-target entity representations that degrades factual recall and structured reasoning. Method: The work offers a mechanistic hypothesis for this phenomenon by designing a synthetic task in which a Transformer is trained from scratch to internalize a structured knowledge graph. By evaluating edited models, extracting and analyzing internal representations, and replicating the findings on pre-trained Llama and Mamba models, the authors attribute KE's side effects under controlled conditions. Contribution/Results: Experiments demonstrate that KE not only alters the targeted fact but also distorts the representations of associated entities, undermining the model's ability to reason over graph structure. The effect is reproduced both in the synthetic task and in real pre-trained models, yielding an interpretable, representation-level account of why KE harms model abilities more broadly.

📝 Abstract
Knowledge Editing (KE) algorithms alter models' weights to perform targeted updates to incorrect, outdated, or otherwise unwanted factual associations. To better identify the possibilities and limitations of these approaches, recent work has shown that applying KE can adversely affect models' factual recall accuracy and diminish their general reasoning abilities. While these studies give broad insights into the potential harms of KE algorithms, e.g., via performance evaluations on benchmarks, we argue little is understood as to why such destructive failures occur. Is it possible KE methods distort representations of concepts beyond the targeted fact, hence hampering abilities more broadly? If so, what is the extent of this distortion? Motivated by such questions, we define a novel synthetic task wherein a Transformer is trained from scratch to internalize a "structured" knowledge graph. The structure enforces relationships between entities of the graph, such that editing a factual association has "trickling effects" on other entities in the graph (e.g., altering X's parent is Y to Z affects who X's siblings' parent is). Through evaluations of edited models and analysis of extracted representations, we show that KE inadvertently affects representations of entities beyond the targeted one, distorting relevant structures that allow a model to infer unseen knowledge about an entity. We call this phenomenon representation shattering and demonstrate that it results in degradation of factual recall and reasoning performance more broadly. To corroborate our findings in a more naturalistic setup, we perform preliminary experiments with pre-trained Llama and Mamba models, reproducing the representation shattering effect therein as well. Overall, our work yields a precise mechanistic hypothesis to explain why KE has adverse effects on model abilities.
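The "trickling effects" described in the abstract can be illustrated with a minimal toy graph. This is a hypothetical sketch of the kind of structured knowledge graph the paper describes, not the authors' actual code; the `FamilyGraph` class and its methods are invented for illustration.

```python
# Hypothetical sketch: a toy "structured" knowledge graph in which a single
# targeted edit (changing X's parent) implicitly changes derived facts
# (who X's siblings are). Not the paper's implementation.

class FamilyGraph:
    def __init__(self):
        self.parent = {}  # maps child -> parent

    def add_child(self, child, parent):
        self.parent[child] = parent

    def siblings(self, entity):
        # Derived fact: siblings are entities sharing the same parent.
        p = self.parent.get(entity)
        return [c for c, q in self.parent.items() if q == p and c != entity]

    def edit_parent(self, child, new_parent):
        # A single targeted edit, analogous to a KE update.
        self.parent[child] = new_parent

g = FamilyGraph()
g.add_child("X", "Y")
g.add_child("S", "Y")        # S is X's sibling before the edit

before = g.siblings("X")     # ["S"]
g.edit_parent("X", "Z")      # edit "X's parent is Y" to "X's parent is Z"
after = g.siblings("X")      # []: the edit trickled into the sibling relation
```

In a graph like this, a model that has internalized the structure must update more than the edited triple to stay consistent; the paper's finding is that KE instead distorts the representations supporting those derived facts.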
Problem

Research questions and friction points this paper is trying to address.

Knowledge Editing
Transformer Models
Representation Shattering
Innovation

Methods, ideas, or system contributions that make the work stand out.

Knowledge Editing
Representation Shattering
Pre-trained Models
Kento Nishi
Harvard University
Maya Okawa
CBS-NTT Program in Physics of Intelligence, Harvard University; Physics and Informatics Lab, NTT Research Inc.
Rahul Ramesh
Computer and Information Science, University of Pennsylvania
Mikail Khona
Research Scientist, NVIDIA
Ekdeep Singh Lubana
Goodfire AI
Hidenori Tanaka
CBS-NTT Program in Physics of Intelligence, Harvard University; Physics and Informatics Lab, NTT Research Inc.