🤖 AI Summary
This work addresses the challenge of catastrophic forgetting in large language models when fine-tuned for medical applications, which often degrades general instruction-following capabilities and hinders clinical deployment. To mitigate this issue with minimal supervised data, the authors propose an efficient, low-resource weight-space interpolation method that merges a clinical foundation model (GatorTronLlama) with a general-purpose instruction-tuned model (Llama-3.1-8B-Instruct). The resulting fused model effectively preserves both domain-specific medical competence and broad instruction understanding. Empirical evaluations across multiple medical benchmarks and clinical generation tasks demonstrate performance comparable to fully fine-tuned models, while substantially improving training efficiency and practical deployability.
📝 Abstract
Large language models (LLMs) have been adopted in the medical domain for clinical documentation to reduce clinician burden. However, studies have reported that LLMs often "forget" much of their instruction-following ability when fine-tuned on a task-specific medical dataset, a critical challenge in adopting general-purpose LLMs for clinical applications. This study presents a model merging framework to efficiently adapt general-purpose LLMs to the medical domain by countering this forgetting issue. By merging a clinical foundation model (GatorTronLlama) with a general-purpose instruction-tuned model (Llama-3.1-8B-Instruct) via interpolation-based merging methods, we seek to derive a domain-adapted model with strong performance on clinical tasks while retaining instruction-following ability. Comprehensive evaluation across medical benchmarks and five clinical generation tasks (e.g., radiology and discharge summarization) shows that merged models can effectively mitigate catastrophic forgetting, preserve clinical domain expertise, and retain instruction-following ability. In addition, our model merging strategies demonstrate training efficiency, achieving performance on par with fully fine-tuned baselines under severely constrained supervision (e.g., 64-shot vs. 256-shot). Consequently, weight-space merging constitutes a highly scalable solution for adapting open-source LLMs to clinical applications, facilitating broader deployment in resource-constrained healthcare environments.
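The core idea behind interpolation-based weight-space merging can be sketched as a convex combination of matched parameters from the two models. The sketch below is illustrative only: the function name, the mixing coefficient `alpha`, and the plain-dict representation of model weights are assumptions for clarity, not the paper's exact merging procedure or coefficient schedule.

```python
def interpolate_weights(domain_state, general_state, alpha=0.5):
    """Linearly interpolate two models' parameters, key by key.

    domain_state / general_state: mappings from parameter name to value
    (here plain floats for illustration; in practice, tensors of equal shape).
    alpha=1.0 keeps only the domain model; alpha=0.0 keeps only the general model.
    """
    # Both models must share an architecture (identical parameter names/shapes).
    assert domain_state.keys() == general_state.keys(), "architectures must match"
    return {
        name: alpha * domain_state[name] + (1.0 - alpha) * general_state[name]
        for name in domain_state
    }


# Toy example: two "models" with a single scalar parameter each.
clinical = {"layer.weight": 2.0}
general = {"layer.weight": 0.0}
merged = interpolate_weights(clinical, general, alpha=0.5)
print(merged["layer.weight"])  # 1.0
```

Because the merge touches only the weights, it requires no gradient updates, which is what makes the approach cheap relative to full fine-tuning; the only free choice is the interpolation coefficient (or per-layer coefficients, in more elaborate schemes).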