Medical Hallucinations in Foundation Models and Their Impact on Healthcare

📅 2025-02-26 · 🏛️ arXiv.org · 📈 Citations: 41 (influential: 0)

🤖 AI Summary
Medical foundation models may generate "hallucinations"—factual, logical, or evidence-inconsistent errors—that jeopardize clinical decision-making and patient safety. To address this, we first propose a multidimensional taxonomy of medical hallucinations and establish a real-world, clinician-annotated benchmark dataset derived from authentic clinical cases; we further validate its clinical impact via an international physician survey. Methodologically, we integrate expert annotation, empirical behavioral surveys, and large language model (LLM) evaluation to systematically assess the efficacy of chain-of-thought (CoT) reasoning and retrieval-augmented generation (RAG) in mitigating hallucinations. Results show both techniques significantly reduce hallucination rates, yet residual hallucinations remain clinically hazardous. Building on these findings, we introduce a patient-safety-centered AI governance and ethics framework, offering theoretical foundations and actionable pathways for responsible deployment of medical AI.

📝 Abstract
Foundation Models that are capable of processing and generating multi-modal data have transformed AI's role in medicine. However, a key limitation of their reliability is hallucination, where inaccurate or fabricated information can impact clinical decisions and patient safety. We define medical hallucination as any instance in which a model generates misleading medical content. This paper examines the unique characteristics, causes, and implications of medical hallucinations, with a particular focus on how these errors manifest in real-world clinical scenarios. Our contributions include (1) a taxonomy for understanding and addressing medical hallucinations, (2) benchmarking models using a medical hallucination dataset and physician-annotated LLM responses to real medical cases, providing direct insight into the clinical impact of hallucinations, and (3) a multi-national clinician survey on their experiences with medical hallucinations. Our results reveal that inference techniques such as Chain-of-Thought (CoT) prompting and Search-Augmented Generation can effectively reduce hallucination rates. However, despite these improvements, non-trivial levels of hallucination persist. These findings underscore the ethical and practical imperative for robust detection and mitigation strategies, establishing a foundation for regulatory policies that prioritize patient safety and maintain clinical integrity as AI becomes more integrated into healthcare. Clinician feedback highlights the urgent need not only for technical advances but also for clearer ethical and regulatory guidelines to ensure patient safety. A repository organizing the paper's resources, summaries, and additional information is available at https://github.com/mitmedialab/medical_hallucination.
Problem

Research questions and friction points this paper is trying to address.

Evaluating medical hallucinations in foundation models across clinical reasoning tasks
Assessing factual inaccuracies in AI outputs that could alter clinical decisions
Comparing hallucination rates between general-purpose and medical-specialized AI models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Chain-of-thought prompting reduces hallucinations through reasoning
Self-verification mechanisms detect errors via explicit reasoning traces
Large-scale pre-training enables broader knowledge integration for safety
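The first two mitigations above (chain-of-thought prompting and self-verification over an explicit reasoning trace) can be sketched as a minimal two-pass prompt pipeline. This is an illustrative sketch only, not the paper's implementation: the prompt templates, function names, and the `llm` callable are all hypothetical.

```python
# Hypothetical two-pass mitigation sketch: (1) draft an answer with a
# chain-of-thought prompt, (2) ask the model to re-check its own draft.
# Prompt wording is illustrative, not taken from the paper.

def cot_prompt(question: str) -> str:
    """Wrap a clinical question in a chain-of-thought instruction."""
    return (
        "You are a careful clinical assistant.\n"
        f"Question: {question}\n"
        "Reason step by step, citing the evidence behind each step, "
        "then state your conclusion on a line beginning with 'Answer:'."
    )

def verification_prompt(question: str, draft_answer: str) -> str:
    """Ask the model to audit its own reasoning trace for unsupported claims."""
    return (
        f"Question: {question}\n"
        f"Draft answer: {draft_answer}\n"
        "Review each claim in the draft. Flag any statement not supported "
        "by established medical evidence, then output either 'VERIFIED' "
        "or a corrected answer."
    )

def answer_with_verification(question: str, llm) -> str:
    """Two passes through the model: CoT draft, then self-verification."""
    draft = llm(cot_prompt(question))          # pass 1: reasoned draft
    return llm(verification_prompt(question, draft))  # pass 2: self-check
```

In practice `llm` would wrap a model API call; here it is any callable taking and returning a string, which keeps the sketch testable offline with a stub. Note the paper's finding that even with such techniques, non-trivial hallucination levels persist, so this pattern reduces risk rather than eliminating it.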
👥 Authors

Y. Kim, Massachusetts Institute of Technology
H. Jeong, Massachusetts Institute of Technology
Shan Chen, Harvard Medical School
Shuyue Stella Li, University of Washington
M. Lu, University of Washington
K. Alhamoud, Massachusetts Institute of Technology
J. Mun, Carnegie Mellon University
C. Grau, Massachusetts Institute of Technology
M. Jung, Massachusetts Institute of Technology
R. Gameiro, Massachusetts Institute of Technology
L. Fan, Harvard Medical School
Eugene W. Park, Massachusetts Institute of Technology
Tristan Lin, Johns Hopkins University
J. Yoon, Seoul National University Hospital
W. Yoon, Harvard Medical School
M. Sap, Carnegie Mellon University
Y. Tsvetkov, University of Washington
P. Liang, Massachusetts Institute of Technology
Xuhai Xu, Columbia University | Google
Xin Liu, Google Research
D. McDuff, Google Research
Hyeonhoon Lee, Seoul National University Hospital
Hae Won Park, MIT
S. Tulebaev, Harvard Medical School
Cynthia Breazeal, MIT Media Lab