A dataset and benchmark for hospital course summarization with adapted large language models

📅 2024-03-08
🏛️ J. Am. Medical Informatics Assoc.
📈 Citations: 1
Influential: 0
🤖 AI Summary
This study addresses the challenge of generating concise, clinically relevant hospital course summaries (BHCs) using large language models (LLMs) to enhance clinical decision support. To this end, we introduce and publicly release MIMIC-IV-BHC—a standardized, human-annotated parallel dataset—establishing the first benchmark for BHC generation. We propose a dual-track evaluation framework integrating automated metrics (BLEU, BERTScore) with expert clinician preference ratings. We systematically compare prompt-based and fine-tuning approaches across five LLMs, including Llama2-13B and GPT-4. Results show that GPT-4 with prompting significantly outperforms human-written summaries in clinician preference tests (*P* < 0.001), while fine-tuned Llama2-13B achieves the highest scores on automated metrics. This work provides (1) a high-quality, open-source dataset; (2) a reproducible, clinically grounded evaluation framework; and (3) practical, model-agnostic adaptation strategies for medical summarization.

📝 Abstract
OBJECTIVE Brief hospital course (BHC) summaries are clinical documents that summarize a patient's hospital stay. While large language models (LLMs) exhibit remarkable capabilities in automating real-world tasks, their capabilities for healthcare applications such as synthesizing BHCs from clinical notes have not been shown. We introduce a novel preprocessed dataset, the MIMIC-IV-BHC, encapsulating clinical note and BHC pairs to adapt LLMs for BHC synthesis. Furthermore, we introduce a benchmark of the summarization performance of 2 general-purpose LLMs and 3 healthcare-adapted LLMs. MATERIALS AND METHODS Using clinical notes as input, we apply prompting-based (using in-context learning) and fine-tuning-based adaptation strategies to 3 open-source LLMs (Clinical-T5-Large, Llama2-13B, and FLAN-UL2) and 2 proprietary LLMs (Generative Pre-trained Transformer [GPT]-3.5 and GPT-4). We evaluate these LLMs across multiple context-length inputs using natural language similarity metrics. We further conduct a clinical study with 5 clinicians, comparing clinician-written and LLM-generated BHCs across 30 samples, focusing on their potential to enhance clinical decision-making through improved summary quality. We compare reader preferences for the original and LLM-generated summaries using Wilcoxon signed-rank tests. We further request optional qualitative feedback from clinicians to gain deeper insights into their preferences, and we present the frequency of common themes arising from these comments. RESULTS The Llama2-13B fine-tuned LLM outperforms other domain-adapted models given quantitative evaluation metrics of Bilingual Evaluation Understudy (BLEU) and Bidirectional Encoder Representations from Transformers (BERT)-Score. GPT-4 with in-context learning shows more robustness to increasing context lengths of clinical note inputs than fine-tuned Llama2-13B.
Despite comparable quantitative metrics, the reader study shows a significant preference for summaries generated by GPT-4 with in-context learning compared to both Llama2-13B fine-tuned summaries and the original summaries (P<.001), highlighting the need for qualitative clinical evaluation. DISCUSSION AND CONCLUSION We release a foundational clinically relevant dataset, the MIMIC-IV-BHC, and present an open-source benchmark of LLM performance in BHC synthesis from clinical notes. We observe high-quality summarization performance for both in-context proprietary and fine-tuned open-source LLMs using both quantitative metrics and a qualitative clinical reader study. Our research effectively integrates elements from the data assimilation pipeline: our methods use (1) clinical data sources to integrate, (2) data translation, and (3) knowledge creation, while our evaluation strategy paves the way for (4) deployment.
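The quantitative track of the evaluation relies on n-gram overlap (BLEU) alongside embedding similarity (BERTScore). As a rough illustration of what an overlap metric like BLEU measures, here is a minimal, self-contained sketch of modified n-gram precision with a brevity penalty; it is not the paper's implementation or configuration, and real evaluations would use an established library instead.

```python
from collections import Counter
import math

def bleu(candidate: str, reference: str, max_n: int = 2) -> float:
    """Toy single-reference BLEU: geometric mean of modified n-gram
    precisions (up to max_n) times a brevity penalty. Illustrative only."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    # Brevity penalty discourages very short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

Note that surface-overlap scores like this can rank summaries differently from clinicians, which is exactly the gap the paper's reader study is designed to expose.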
Problem

Research questions and friction points this paper is trying to address.

Adapting LLMs for hospital course summarization from clinical notes
Evaluating LLM performance in synthesizing brief hospital course summaries
Assessing clinical utility of LLM-generated summaries for decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapted LLMs for hospital course summarization
Used prompting and fine-tuning strategies
Evaluated with clinical and quantitative metrics
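The prompting-based adaptation strategy rests on in-context learning: demonstration (clinical notes, BHC) pairs are prepended to the target notes before querying the model. A minimal sketch of how such a k-shot prompt might be assembled follows; the section labels and separators are hypothetical placeholders, not the paper's actual template.

```python
def build_icl_prompt(demos, target_note, k=2):
    """Assemble a k-shot in-context learning prompt for BHC synthesis.

    demos: list of (clinical_note, bhc_summary) demonstration pairs.
    The "Clinical notes:" / "Brief hospital course:" labels are
    illustrative, not the paper's exact prompt wording.
    """
    blocks = []
    for note, bhc in demos[:k]:
        blocks.append(f"Clinical notes:\n{note}\n\nBrief hospital course:\n{bhc}")
    # The target note ends with an empty summary slot for the model to fill.
    blocks.append(f"Clinical notes:\n{target_note}\n\nBrief hospital course:\n")
    return "\n\n---\n\n".join(blocks)
```

Keeping prompt assembly separate from model calls makes it easy to vary k and the demonstration set when probing robustness to growing context lengths, one of the comparisons the benchmark performs.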
Asad Aali
Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
Dave Van Veen
PhD Student, Stanford University
Machine LearningLarge Language ModelsComputational Imaging
Y. Arefeen
Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
Jason Hom
Department of Medicine, Stanford, CA, USA
Christian Bluethgen
Radiologist, Clinician Scientist, USZ Zurich, AIMI Center, Stanford University
RadiologyThoracic ImagingMultimodal Machine Learning
E. Reis
Albert Einstein Israelite Hospital, São Paulo, Brazil; Center for Artificial Intelligence in Medicine and Imaging, Palo Alto, CA, USA
S. Gatidis
Department of Radiology, Stanford University, Stanford, CA, USA; Center for Artificial Intelligence in Medicine and Imaging, Palo Alto, CA, USA
Namuun Clifford
School of Nursing, The University of Texas at Austin, Austin, TX, USA
Joseph Daws
One Medical, San Francisco, CA, USA
A. S. Tehrani
One Medical, San Francisco, CA, USA
Jangwon Kim
Amazon, Seattle, WA, USA
Akshay S. Chaudhari
Department of Radiology, Stanford University, Stanford, CA, USA; Department of Biomedical Data Science, Stanford University, Stanford, CA, USA; Center for Artificial Intelligence in Medicine and Imaging, Palo Alto, CA, USA