Facts Fade Fast: Evaluating Memorization of Outdated Medical Knowledge in Large Language Models

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical knowledge evolves continuously, yet large language models (LLMs) rely on static pretraining data, leaving them prone to repeating outdated clinical consensus and giving inaccurate recommendations. To study this, the authors introduce two QA benchmarks derived from systematic reviews: MedRevQA (16,501 QA pairs covering general biomedical knowledge) and MedChangeQA (a 512-pair subset in which medical consensus has changed over time). An evaluation of eight prominent LLMs on these benchmarks reveals consistent reliance on obsolete knowledge across all models. The paper further analyzes how obsolete pretraining data and training strategies contribute to this behavior and proposes directions for mitigation, laying the groundwork for more current and clinically reliable medical AI systems.

📝 Abstract
The growing capabilities of Large Language Models (LLMs) show significant potential to enhance healthcare by assisting medical researchers and physicians. However, their reliance on static training data is a major risk when medical recommendations evolve with new research and developments. When LLMs memorize outdated medical knowledge, they can provide harmful advice or fail at clinical reasoning tasks. To investigate this problem, we introduce two novel question-answering (QA) datasets derived from systematic reviews: MedRevQA (16,501 QA pairs covering general biomedical knowledge) and MedChangeQA (a subset of 512 QA pairs where medical consensus has changed over time). Our evaluation of eight prominent LLMs on the datasets reveals consistent reliance on outdated knowledge across all models. We additionally analyze the influence of obsolete pre-training data and training strategies to explain this phenomenon and propose future directions for mitigation, laying the groundwork for developing more current and reliable medical AI systems.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM memorization of outdated medical knowledge in healthcare applications
Assessing risks from static training data when medical recommendations evolve
Analyzing harmful advice from obsolete knowledge in clinical reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MedRevQA (16,501 QA pairs) and MedChangeQA (a 512-pair subset with changed consensus), both derived from systematic reviews
Evaluates eight LLMs on outdated medical knowledge
Analyzes obsolete pre-training data and training strategies to explain reliance on outdated knowledge
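The evaluation idea described above can be sketched as follows: score a model's answers against both the current and the outdated consensus label for each question, so that anchoring to obsolete knowledge becomes measurable. This is a minimal illustrative sketch, not the paper's actual code; the dataset fields (`question`, `current_answer`, `outdated_answer`) and the stub model are assumptions for demonstration.

```python
def score_consensus_alignment(qa_pairs, model_answer):
    """Return (accuracy vs. current consensus, agreement with outdated consensus).

    qa_pairs: list of dicts with hypothetical keys "question",
              "current_answer", and "outdated_answer".
    model_answer: callable mapping a question string to an answer string.
    """
    current_hits = outdated_hits = 0
    for item in qa_pairs:
        ans = model_answer(item["question"])
        if ans == item["current_answer"]:
            current_hits += 1
        if ans == item["outdated_answer"]:
            outdated_hits += 1
    n = len(qa_pairs)
    return current_hits / n, outdated_hits / n

# Toy example: a stub "model" that always answers "yes", standing in for an LLM.
qa_pairs = [
    {"question": "Is therapy X recommended?", "current_answer": "no",  "outdated_answer": "yes"},
    {"question": "Is screening Y advised?",   "current_answer": "yes", "outdated_answer": "yes"},
]
current_acc, outdated_acc = score_consensus_alignment(qa_pairs, lambda q: "yes")
# current_acc == 0.5, outdated_acc == 1.0: the stub tracks the old consensus perfectly.
```

A high `outdated_acc` relative to `current_acc` on questions where consensus has flipped is the kind of anchoring signal the benchmarks are built to expose.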