STEAM: A Semantic-Level Knowledge Editing Framework for Large Language Models

📅 2025-10-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing knowledge editing methods focus on token-level likelihood optimization, causing updated knowledge to be encoded as isolated residual components in the latent space, which compromises semantic coherence and bypasses the model's natural reasoning pathways. To address this, the paper proposes STEAM, a semantic-level knowledge editing framework built around "semantic anchors." STEAM first identifies target representations that serve as semantic anchors for the updated factual association, then applies a latent-space alignment loss during optimization to guide the internal representation of the edited fact toward these anchors, achieving integration of newly injected knowledge with the model's preexisting knowledge structure without full retraining. Experiments show that STEAM improves the model's ability to reason with edited knowledge and enhances semantic coherence relative to existing locate-and-edit baselines.

📝 Abstract
Large Language Models store extensive factual knowledge acquired during large-scale pre-training. However, this knowledge is inherently static, reflecting only the state of the world at the time of training. Knowledge editing has emerged as a promising solution for updating outdated or incorrect facts without full retraining. However, most existing locate-and-edit methods primarily focus on token-level likelihood optimization without addressing semantic coherence. Our analysis reveals that such edited knowledge is often encoded as isolated residual streams in the model's latent space, distinct from pre-existing knowledge and bypassing the natural reasoning process. To address this, we propose STEAM, a semantic-level knowledge editing framework that enhances the integration of updated knowledge into the model's knowledge structure. STEAM first identifies target representations as semantic anchors for the updated factual association, then guides the internal representation of the edited fact towards these anchors through an alignment loss during optimization. Experimental results demonstrate that STEAM improves the model's ability to reason with edited knowledge and enhances semantic coherence, underscoring the importance of latent-space alignment for reliable and coherent knowledge editing. The code is available at https://github.com/GY-Jeong/STEAM.
Problem

Research questions and friction points this paper is trying to address.

Updating outdated knowledge in LLMs without full retraining
Addressing semantic incoherence in existing knowledge editing methods
Integrating edited knowledge naturally into the model's reasoning process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic-level knowledge editing framework for LLMs
Uses semantic anchors for factual association alignment
Optimizes latent-space alignment to enhance reasoning coherence
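The alignment idea summarized above can be sketched as a two-term objective: the usual token-level likelihood loss plus a latent-space term that pulls the edited fact's hidden representation toward its semantic anchor. This is a minimal illustration only; the function names, the choice of cosine distance, and the weighting factor `lam` are assumptions for the sketch, not details taken from the paper.

```python
import math


def cosine_alignment_loss(hidden, anchor):
    """Latent-space alignment term: 1 - cosine similarity between the
    edited fact's hidden representation and its semantic anchor.
    Returns 0 when the two vectors point in the same direction."""
    dot = sum(h * a for h, a in zip(hidden, anchor))
    norm_h = math.sqrt(sum(h * h for h in hidden))
    norm_a = math.sqrt(sum(a * a for a in anchor))
    return 1.0 - dot / (norm_h * norm_a)


def edit_loss(likelihood_loss, hidden, anchor, lam=0.1):
    """Combined editing objective: token-level likelihood loss plus a
    weighted alignment penalty (lam is an illustrative hyperparameter)."""
    return likelihood_loss + lam * cosine_alignment_loss(hidden, anchor)
```

In this framing, a perfectly aligned representation contributes nothing beyond the likelihood loss, while a misaligned one is penalized in proportion to its angular distance from the anchor.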
Geunyeong Jeong
Konkuk University
Juoh Sun
Konkuk University
Seonghee Lee
Konkuk University
Harksoo Kim
Professor of Computer Science and Engineering, Konkuk University
Natural Language Processing · Question Answering · Relation Extraction · Dialogue Systems