Assessing the Limitations of Large Language Models in Clinical Fact Decomposition

📅 2024-12-17
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Clinical fact decomposition—rewriting complex clinical statements into concise, verifiable atomic facts—is critical for the safe use of LLMs in medical applications, yet it has not been systematically studied on terminology-dense, heterogeneous electronic health records (EHRs). This paper introduces FactEHR, a document-level clinical fact decomposition dataset comprising 2,168 clinical notes spanning four note types from three hospital systems. A systematic evaluation of four commonly used LLMs, including review by clinicians, reveals substantial variability in decomposition quality, with some models generating 2.6x more facts per sentence than others. The authors release the dataset together with an evaluation framework and codebase, filling a gap in resources for fine-grained medical fact verification and supporting trustworthy LLM research in healthcare.

📝 Abstract
Verifying factual claims is critical for using large language models (LLMs) in healthcare. Recent work has proposed fact decomposition, which uses LLMs to rewrite source text into concise sentences conveying a single piece of information, as an approach for fine-grained fact verification. Clinical documentation poses unique challenges for fact decomposition due to dense terminology and diverse note types. To explore these challenges, we present FactEHR, a dataset consisting of full document fact decompositions for 2,168 clinical notes spanning four types from three hospital systems. Our evaluation, including review by clinicians, highlights significant variability in the quality of fact decomposition for four commonly used LLMs, with some LLMs generating 2.6x more facts per sentence than others. The results underscore the need for better LLM capabilities to support factual verification in clinical text. To facilitate future research in this direction, we plan to release our code at https://github.com/som-shahlab/factehr.
Problem

Research questions and friction points this paper is trying to address.

Evaluating factuality in clinical notes using LLMs
Decomposing complex clinical statements into atomic facts
Addressing challenges in clinical documentation for fact verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based fact decomposition for clinical notes
FactEHR dataset with 987,266 entailment pairs
Clinician-reviewed evaluation of LLM performance
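The pipeline sketched above—decomposing each note sentence into atomic facts and pairing facts with source text for entailment checking—can be illustrated with a minimal sketch. The `decompose` stub stands in for an actual LLM call (the prompt and naive conjunction splitting are illustrative assumptions, not the paper's method); the pairing logic shows how bidirectional (premise, hypothesis) pairs support fine-grained verification.

```python
# Illustrative sketch of fact decomposition into entailment pairs.
# decompose() is a stub standing in for an LLM prompted with something like
# "Rewrite this sentence into standalone atomic facts" — assumption, not
# the paper's actual prompt.

def decompose(sentence: str) -> list[str]:
    # Naive stand-in: split on "and" and normalize punctuation.
    parts = [p.strip() for p in sentence.split(" and ")]
    return [p if p.endswith(".") else p + "." for p in parts]

def build_entailment_pairs(note_sentences: list[str]) -> list[tuple[str, str]]:
    """Pair each source sentence with its generated facts in both
    directions for fine-grained fact verification."""
    pairs = []
    for sent in note_sentences:
        for fact in decompose(sent):
            pairs.append((sent, fact))  # does the note entail the fact? (precision)
            pairs.append((fact, sent))  # do the facts cover the sentence? (recall)
    return pairs

note = ["Patient is afebrile and tolerating oral intake."]
pairs = build_entailment_pairs(note)
# One sentence decomposed into two facts yields four (premise, hypothesis) pairs.
```

Checking entailment in both directions is what lets a verifier score both hallucinated facts (note does not entail fact) and omissions (facts do not cover the note).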
👥 Authors

Monica Munnangi
Khoury College of Computer Sciences, Northeastern University

Akshay Swaminathan
Department of Biomedical Data Science, Stanford School of Medicine

J. Fries
Center for Biomedical Informatics Research, Stanford University

Jenelle Jindal
Stanford University

Sanjana Narayanan
Center for Biomedical Informatics Research, Stanford University

Iván López
Department of Biomedical Data Science, Stanford School of Medicine

Lucia Tu
Center for Biomedical Informatics Research, Stanford University

Philip Chung
Department of Anesthesiology, Perioperative & Pain Medicine, Stanford School of Medicine

J. Omiye
Department of Dermatology, Stanford School of Medicine

Mehr Kashyap
Department of Biomedical Data Science, Stanford School of Medicine

Nigam H. Shah
Department of Medicine, Clinical Excellence Research Center, Technology and Digital Solutions, Stanford Health Care