KnowledgeSmith: Uncovering Knowledge Updating in LLMs with Model Editing and Unlearning

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically investigates knowledge updating mechanisms in large language models (LLMs), addressing two open questions: (1) whether knowledge exhibits hierarchical editability, and (2) how knowledge editing and machine unlearning behaviors diverge as data scale increases. To this end, we propose KnowledgeSmith—a unified framework that formalizes both tasks as constrained optimization problems—thereby uncovering fundamental trade-offs among knowledge propagation, plasticity, consistency, and capacity. Leveraging automatically generated, multilevel structured intervention data, we conduct large-scale, controlled experiments. Results demonstrate that LLMs lack human-like hierarchical knowledge editability; moreover, improving edit consistency inherently degrades model capacity. This work establishes the first systematic, multi-scale empirical foundation for understanding dynamic knowledge evolution in LLMs, offering novel insights into the intrinsic limitations and design principles of knowledge maintenance in foundation models.

📝 Abstract
Knowledge editing and machine unlearning are two popular approaches for keeping large language models (LLMs) up-to-date. However, the knowledge updating mechanism of LLMs remains largely unexplored, because existing evaluations are insufficient, isolated, and small-scale. For instance, do LLMs resemble humans in how they modify certain knowledge? How do editing and unlearning differ as training data increases? This paper proposes KnowledgeSmith, a unified framework for systematically understanding the updating mechanism of LLMs. We first cast editing and unlearning as instances of a single constrained optimization problem. We then propose an automatic dataset generator that provides structured interventions across multiple graph levels and data scales, enabling controlled studies of how different modification strategies propagate through model knowledge. Extensive experiments yield nuanced insights into knowledge propagation, plasticity scaling, consistency, and robustness. For instance, our results show that LLMs do not update different levels of knowledge the way humans do, and that there exists a consistency-capacity trade-off. We hope our findings can inform the design of more reliable and scalable updating strategies. Code: https://github.com/AIFrontierLab/KnowledgeSmith.git
Problem

Research questions and friction points this paper is trying to address.

Investigates how LLMs update knowledge through editing and unlearning
Explores knowledge propagation across different graph levels and data scales
Analyzes consistency-capacity trade-offs in model knowledge updating mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework for LLM knowledge updating
Automatic dataset generator with structured interventions
Constrained optimization for editing and unlearning
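The "one constrained optimization problem" framing from the abstract can be sketched schematically as follows. This is an illustrative formulation, not the paper's exact objective; the symbols $\theta$, $\mathcal{D}_{\text{edit}}$, $\mathcal{D}_{\text{retain}}$, and $\epsilon$ are assumptions introduced here for clarity:

```latex
\min_{\theta'} \;
\mathbb{E}_{(x,\, y^{*}) \sim \mathcal{D}_{\text{edit}}}
\big[ \ell\big(f_{\theta'}(x),\, y^{*}\big) \big]
\quad \text{s.t.} \quad
\mathbb{E}_{(x,\, y) \sim \mathcal{D}_{\text{retain}}}
\big[ \ell\big(f_{\theta'}(x),\, y\big) \big] \le \epsilon
```

Under this view, editing sets the target $y^{*}$ to the new desired answer, while unlearning replaces the objective with one that suppresses the original answer (e.g., maximizing its loss or matching a refusal distribution), so both tasks become instances of the same constrained problem: change the targeted knowledge while bounding degradation on retained knowledge.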