Addressing Overprescribing Challenges: Fine-Tuning Large Language Models for Medication Recommendation Tasks

📅 2025-03-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing drug recommendation systems generalize poorly across heterogeneous electronic health record (EHR) systems and underuse unstructured clinical text, while large language model (LLM)-based approaches frequently overprescribe, compromising medication safety. This paper proposes LAMO (Language-Assisted Medication Recommendation), a framework that addresses these limitations by parameter-efficiently fine-tuning open-source LLMs for medication recommendation and by drawing on the clinical information in free-text notes, a resource traditional methods underuse. In internal validation, LAMO achieves over 10% higher accuracy than state-of-the-art methods, and temporal, cross-institutional, and out-of-distribution experiments demonstrate robust generalization, including to medications unseen during training.

📝 Abstract
Medication recommendation systems have garnered attention within healthcare for their potential to deliver personalized and efficacious drug combinations based on patients' clinical data. However, existing methodologies encounter challenges in adapting to diverse Electronic Health Record (EHR) systems and effectively utilizing unstructured data, resulting in limited generalization capabilities and suboptimal performance. Recently, interest has grown in harnessing Large Language Models (LLMs) in the medical domain to support healthcare professionals and enhance patient care. Despite the emergence of medical LLMs and their promising results in tasks like medical question answering, their practical applicability in clinical settings, particularly in medication recommendation, remains underexplored. In this study, we evaluate both general-purpose and medical-specific LLMs for medication recommendation tasks. Our findings reveal that LLMs frequently encounter the challenge of overprescribing, leading to heightened clinical risks and diminished medication recommendation accuracy. To address this issue, we propose Language-Assisted Medication Recommendation (LAMO), which employs a parameter-efficient fine-tuning approach to tailor open-source LLMs for optimal performance in medication recommendation scenarios. LAMO leverages the wealth of clinical information within clinical notes, a resource often underutilized in traditional methodologies. As a result, LAMO outperforms previous state-of-the-art methods by over 10% in internal validation accuracy. Furthermore, temporal and external validations demonstrate LAMO's robust generalization across varying time periods and hospital contexts. Additionally, an out-of-distribution medication recommendation experiment demonstrates LAMO's remarkable accuracy even with medications outside the training data.
Problem

Research questions and friction points this paper is trying to address.

Overprescribing challenges in medication recommendation systems.
Limited generalization in adapting to diverse EHR systems.
Underutilization of unstructured clinical data in existing methods.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes LLMs for medication recommendation.
Uses clinical notes for enhanced accuracy.
Improves generalization across diverse EHR systems.
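The abstract describes LAMO's core mechanism only as "parameter-efficient fine-tuning" of an open-source LLM. A common instance of this technique is LoRA, where a frozen pretrained weight matrix W is augmented with a trainable low-rank update (alpha/r)·B·A. The sketch below is a dependency-free toy illustration of that idea, not the paper's implementation; all names (`LoRALinear`, `matvec`) and dimensions are hypothetical.

```python
# Toy sketch of LoRA-style parameter-efficient fine-tuning (illustrative only;
# the paper does not specify which PEFT method LAMO uses).

def matvec(M, x):
    """Multiply matrix M (list of rows) by vector x."""
    return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update B @ A."""
    def __init__(self, W, A, B, alpha=1.0):
        self.W = W                     # frozen pretrained weight (d_out x d_in)
        self.A = A                     # trainable down-projection (r x d_in)
        self.B = B                     # trainable up-projection (d_out x r)
        self.scaling = alpha / len(A)  # alpha / rank

    def forward(self, x):
        base = matvec(self.W, x)
        delta = matvec(self.B, matvec(self.A, x))
        return [b + self.scaling * d for b, d in zip(base, delta)]

# With B zero-initialized (the standard LoRA init), the adapted layer
# reproduces the frozen model exactly before any training step.
W = [[1.0, 2.0], [3.0, 4.0]]
A = [[0.5, -0.5]]            # rank r = 1
B = [[0.0], [0.0]]           # zero init
layer = LoRALinear(W, A, B, alpha=2.0)
print(layer.forward([1.0, 1.0]))  # → [3.0, 7.0], identical to W @ x
```

Only A and B (2·r·d parameters instead of d²) would be updated during fine-tuning, which is what makes adapting a large model to a task like medication recommendation tractable.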
Zihao Zhao
University of Science and Technology of China, Hefei, 230026, China
Chenxiao Fan
Johns Hopkins University
Chongming Gao
University of Science and Technology of China, Hefei, 230026, China
Fuli Feng
University of Science and Technology of China, Hefei, 230026, China
Xiangnan He
University of Science and Technology of China
Recommendation · Causality · Big Data · Information Retrieval · Machine Learning