Beyond Memorization: A Rigorous Evaluation Framework for Medical Knowledge Editing

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical knowledge editing (KE) faces challenges including insufficient knowledge internalization, poor generalization, and opaque decision-making, and existing methods lack systematic clinical validation. To address this, we introduce MedEditBench—the first comprehensive KE benchmark for the medical domain—which reveals that mainstream KE methods achieve only shallow factual memorization without transferring to reasoning. We propose Self-Generated Rationale Editing (SGR-Edit), a paradigm that edits models toward self-generated reasoning chains rather than raw facts, shifting the target of editing from memorization to reasoning. We further uncover the localization patterns of medical knowledge in LLMs and characterize its continuous evolution under sequential editing. Integrating fact injection, rationale chain editing, and context augmentation—guided by knowledge localization analysis and clinical-logic-driven generalization evaluation—SGR-Edit achieves an average 32.7% improvement in cross-scenario generalization accuracy, significantly enhancing clinical adaptability and decision interpretability.

📝 Abstract
Recently, knowledge editing (KE) has emerged as a promising approach to update specific facts in Large Language Models (LLMs) without the need for full retraining. Despite their effectiveness on general-domain benchmarks, the applicability of KE methods to the complex medical domain remains largely unexplored. Medical knowledge editing is particularly challenging, as it requires LLMs to internalize the knowledge and generalize to unseen scenarios for effective and interpretable decision-making. In this work, we propose a novel framework called MedEditBench to rigorously evaluate the effectiveness of existing KE methods in the medical domain. In MedEditBench, we introduce a new medical knowledge editing benchmark as well as three different knowledge editing paradigms, which are designed to assess the impact of different knowledge sources for editing. Our findings indicate that current KE methods result in only superficial memorization of the injected information, failing to generalize to new scenarios. To overcome this limitation, we present Self-Generated Rationale Editing (SGR-Edit), which utilizes model-derived rationales as the target knowledge for editing, thereby uncovering the underlying reasoning process and demonstrating significant improvements over existing KE approaches. Additionally, we offer deeper insights into medical knowledge editing, including the localization of medical knowledge in LLMs and the impact of sequential editing on evolving knowledge. These insights could provide practical guidance for implementing KE methods in real-world medical applications.
Problem

Research questions and friction points this paper addresses.

Evaluating medical knowledge editing in LLMs effectively
Addressing superficial memorization in medical knowledge updates
Improving generalization in medical decision-making via SGR-Edit
Innovation

Methods, ideas, and system contributions that make the work stand out.

Introduces MedEditBench for medical KE evaluation
Proposes SGR-Edit using model-derived rationales
Analyzes medical knowledge localization in LLMs