A Study of Large Language Models for Patient Information Extraction: Model Architecture, Fine-Tuning Strategy, and Multi-task Instruction Tuning

📅 2025-09-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of improving robustness and generalizability of large language models (LLMs) for patient information extraction from clinical narratives. To this end, we systematically compare encoder-only (BERT, GatorTron) and decoder-only (Llama 3.1) architectures for concept and relation extraction, and propose a unified framework integrating prompt-based parameter-efficient fine-tuning (LoRA) with multi-task instruction tuning. Our method significantly enhances transfer performance in few-shot and zero-shot settings. Evaluated across five benchmark clinical datasets, results show that decoder-only models are better suited for generative extraction tasks; multi-task instruction tuning yields an average F1 improvement of 4.2%; and LoRA achieves 92% of full-parameter fine-tuning performance while updating only 0.3% of model parameters. These findings provide both methodological guidance and empirical evidence for developing lightweight, generalizable, and clinically deployable information extraction systems.
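The summary reports that LoRA reaches 92% of full-parameter fine-tuning performance while updating only 0.3% of model parameters. A minimal NumPy sketch of the LoRA idea on a single linear layer (hypothetical layer sizes and rank, not the paper's actual models or configuration):

```python
import numpy as np

# LoRA sketch: a frozen weight W is adapted by adding a low-rank update
# B @ A; only A and B are trained. Shapes below are illustrative.
rng = np.random.default_rng(0)
d_out, d_in, rank = 1024, 1024, 8

W = rng.standard_normal((d_out, d_in))        # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))                   # trainable up-projection, zero-init
                                              # so the adapted model starts
                                              # identical to the base model

def lora_forward(x):
    """Adapted forward pass: W x + B (A x)."""
    return W @ x + B @ (A @ x)

# Fraction of parameters actually updated for this layer
trainable = A.size + B.size
total = W.size + trainable
print(f"trainable fraction: {trainable / total:.4%}")
```

The 0.3% figure in the paper corresponds to applying adapters to selected layers of a much larger model; the per-layer fraction here depends only on the chosen rank.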

📝 Abstract
Natural language processing (NLP) is a key technology for extracting important patient information from clinical narratives to support healthcare applications. The rapid development of large language models (LLMs) has revolutionized many NLP tasks in the clinical domain, yet their optimal use for patient information extraction requires further exploration. This study examines the effectiveness of LLMs for patient information extraction, focusing on model architectures, fine-tuning strategies, and multi-task instruction tuning techniques for developing robust and generalizable extraction systems. Specifically, we explore three key questions in applying LLMs to clinical concept and relation extraction: (1) encoder-only versus decoder-only architectures, (2) prompt-based parameter-efficient fine-tuning (PEFT) algorithms, and (3) the effect of multi-task instruction tuning on few-shot learning performance. We benchmarked a suite of LLMs, including encoder-based models (BERT, GatorTron) and decoder-based models (GatorTronGPT, Llama 3.1, GatorTronLlama), across five datasets, and compared traditional full-parameter fine-tuning with prompt-based PEFT. We also explored a multi-task instruction tuning framework that combines both tasks across four datasets, evaluating zero-shot and few-shot learning performance with a leave-one-dataset-out strategy.
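The leave-one-dataset-out protocol mentioned in the abstract can be sketched as follows (dataset names are placeholders; the paper uses four benchmark clinical datasets for this experiment):

```python
# Leave-one-dataset-out: for each dataset, instruction-tune on the others
# and evaluate zero-/few-shot transfer on the held-out set.
datasets = ["ds_a", "ds_b", "ds_c", "ds_d"]  # placeholder names

def leave_one_dataset_out(datasets):
    """Yield (train_sets, held_out) splits, one per dataset."""
    for held_out in datasets:
        train = [d for d in datasets if d != held_out]
        yield train, held_out

splits = list(leave_one_dataset_out(datasets))
for train, held_out in splits:
    # In the actual study: tune on `train`, report F1 on `held_out`.
    print(f"train on {train}, evaluate on {held_out}")
```

Each dataset serves exactly once as the unseen evaluation target, so transfer performance is never inflated by in-distribution training data.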
Problem

Research questions and friction points this paper is trying to address.

Optimizing LLM architectures for clinical information extraction
Evaluating fine-tuning strategies for patient data processing
Assessing multi-task instruction tuning in few-shot learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encoder-only and decoder-only LLM architectures
Prompt-based parameter-efficient fine-tuning algorithms
Multi-task instruction tuning for few-shot learning
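To make the multi-task instruction tuning idea concrete, here is a hypothetical prompt template casting both extraction tasks as text generation, the format a decoder-only model would be tuned on. The template wording and the example note are illustrative assumptions, not the paper's actual prompts:

```python
# Hypothetical instruction templates for the two tasks. Training examples
# from both tasks would be mixed into one instruction-tuning corpus.
def concept_instruction(note):
    """Concept extraction framed as instruction-following generation."""
    return (
        "### Instruction: Extract all clinical concepts "
        "(problems, tests, treatments) from the note below.\n"
        f"### Input: {note}\n### Output:"
    )

def relation_instruction(note, concept_a, concept_b):
    """Relation extraction framed as instruction-following generation."""
    return (
        "### Instruction: State the relation between the two "
        "highlighted concepts in the note below.\n"
        f"### Input: {note}\n"
        f"### Concepts: [{concept_a}] ; [{concept_b}]\n### Output:"
    )

prompt = concept_instruction("Patient started on metformin for type 2 diabetes.")
print(prompt)
```

Because both tasks share one generative interface, a single tuned model can be prompted for either task, which is what enables the zero- and few-shot transfer evaluated in the paper.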
Cheng Peng
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA
Xinyu Dong
Selfiie Corporation
Mengxian Lyu
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA
Daniel Paredes
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA
Yaoyun Zhang
Selfii
Yonghui Wu
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA; Preston A. Wells, Jr. Center for Brain Tumor Therapy, Lillian S. Wells Department of Neurosurgery, University of Florida, Gainesville, Florida, USA