Scalable Medication Extraction and Discontinuation Identification from Electronic Health Records Using Large Language Models

📅 2025-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the dual challenge of drug information extraction and discontinuation status identification from unstructured electronic health record (EHR) text. We conduct the first systematic evaluation of 12 open-source and commercial large language models (LLMs) on these joint tasks under zero-shot and few-shot settings. Using a multi-dataset benchmark—including Re-CASI and MIV-Med—and chain-of-thought prompting, we assess performance across extraction and classification subtasks. Contrary to expectations, general-purpose LLMs significantly outperform domain-specific fine-tuned models. GPT-4o achieves 94.0% F1 for drug extraction, 78.1% accuracy for discontinuation classification, and 72.7% F1 on the joint task in zero-shot mode. Notably, the open-weight Llama-3.1-70B attains 76.2% joint-task F1—comparable to GPT-4o—demonstrating the viability and practical potential of high-performing open models for large-scale, annotation-free clinical medication safety monitoring.

📝 Abstract
Identifying medication discontinuations in electronic health records (EHRs) is vital for patient safety but is often hindered by information being buried in unstructured notes. This study evaluates the capabilities of advanced open-source and proprietary large language models (LLMs) in extracting medications and classifying their status from EHR notes, focusing on their scalability for medication information extraction without human annotation. We collected three EHR datasets from diverse sources to build the evaluation benchmark, evaluated 12 advanced LLMs, and explored multiple prompting strategies. Performance on medication extraction, medication status classification, and the joint task (extraction followed by classification) was systematically compared across all experiments. LLMs showed promising performance on medication extraction and discontinuation classification from EHR notes. GPT-4o consistently achieved the highest average F1 scores in all tasks under the zero-shot setting: 94.0% for medication extraction, 78.1% for discontinuation classification, and 72.7% for the joint task. Open-source models followed closely: Llama-3.1-70B-Instruct achieved the highest performance in medication status classification on the MIV-Med dataset (68.7%) and in the joint task on both the Re-CASI (76.2%) and MIV-Med (60.2%) datasets. Medical-specific LLMs underperformed advanced general-domain LLMs. Few-shot learning generally improved performance, while chain-of-thought (CoT) reasoning showed inconsistent gains. LLMs demonstrate strong potential for medication extraction and discontinuation identification in EHR notes, with open-source models offering scalable alternatives to proprietary systems, and few-shot learning can further improve their capability.
Problem

Research questions and friction points this paper is trying to address.

Extracting medications from unstructured EHR notes efficiently
Identifying medication discontinuations for improved patient safety
Evaluating LLMs' scalability in medication data processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses large language models for EHR data extraction
Evaluates 12 LLMs on medication status classification
Leverages zero-shot and few-shot learning techniques
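The joint task described above (extract medications, then classify each one's status) can be sketched as a single prompt plus an output parser. This is an illustrative sketch only: the prompt wording, status label set, and line-based output format below are assumptions for demonstration, not the paper's actual templates.

```python
# Hypothetical sketch of the joint extraction-then-classification setup.
# STATUS_LABELS, the prompt text, and the "<drug> | <status>" output
# format are all assumptions, not taken from the paper.

STATUS_LABELS = ["continued", "discontinued", "unknown"]

def build_joint_prompt(note: str, few_shot_examples=None) -> str:
    """Compose a zero- or few-shot prompt for the joint task."""
    parts = [
        "You are a clinical NLP assistant.",
        "From the EHR note below, list every medication mentioned and "
        f"label its status as one of {STATUS_LABELS}.",
        "Answer one medication per line as: <medication> | <status>",
    ]
    # Supplying (note, answer) pairs turns this into a few-shot prompt.
    for ex_note, ex_answer in (few_shot_examples or []):
        parts.append(f"Note: {ex_note}\nAnswer:\n{ex_answer}")
    parts.append(f"Note: {note}\nAnswer:")
    return "\n\n".join(parts)

def parse_joint_answer(raw: str):
    """Parse '<medication> | <status>' lines into (drug, status) pairs,
    skipping malformed lines and unexpected labels."""
    pairs = []
    for line in raw.strip().splitlines():
        if "|" not in line:
            continue
        drug, _, status = (part.strip() for part in line.partition("|"))
        if status.lower() in STATUS_LABELS:
            pairs.append((drug, status.lower()))
    return pairs

note = "Patient was advised to stop lisinopril; continue metformin 500 mg."
prompt = build_joint_prompt(note)
# A model reply for this note might look like:
reply = "lisinopril | discontinued\nmetformin | continued"
print(parse_joint_answer(reply))
# [('lisinopril', 'discontinued'), ('metformin', 'continued')]
```

Scoring extraction F1 then conditioning status accuracy on correctly extracted drugs, as the benchmark does, follows naturally from such (drug, status) pairs.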
Chong Shao
Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, USA
Doug Snyder
Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, USA
Chiran Li
Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, USA
Bowen Gu
Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Kerry Ngan
Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Chun-Ting Yang
Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Jiageng Wu
Harvard University
R. Wyss
Division of Pharmacoepidemiology and Pharmacoeconomics, Department of Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Kueiyu Joshua Lin
Harvard Medical School
Jie Yang
Division of Pharmacoepidemiology and Pharmacoeconomics, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA; Broad Institute of MIT and Harvard, Cambridge, MA, USA; Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA