Countering Catastrophic Forgetting of Large Language Models for Better Instruction Following via Weight-Space Model Merging

📅 2026-04-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of catastrophic forgetting in large language models when fine-tuned for medical applications, which often leads to a degradation of general instruction-following capabilities and hinders clinical deployment. To mitigate this issue with minimal supervision, the authors propose an efficient, low-resource weight-space interpolation method that merges a clinical foundation model (GatorTronLlama) with a general-purpose instruction-tuned model (Llama-3.1-8B-Instruct). The resulting fused model preserves both domain-specific medical competence and broad instruction understanding. Empirical evaluations across multiple medical benchmarks and clinical generation tasks demonstrate performance comparable to fully fine-tuned models, while substantially improving training efficiency and practical deployability.
📝 Abstract
Large language models have been adopted in the medical domain for clinical documentation to reduce clinician burden. However, studies have reported that LLMs often "forget" a significant amount of instruction-following ability when fine-tuned using a task-specific medical dataset, a critical challenge in adopting general-purpose LLMs for clinical applications. This study presents a model merging framework to efficiently adapt general-purpose LLMs to the medical domain by countering this forgetting issue. By merging a clinical foundation model (GatorTronLlama) with a general instruct model (Llama-3.1-8B-Instruct) via interpolation-based merge methods, we seek to derive a domain-adapted model with strong performance on clinical tasks while retaining instruction-following ability. Comprehensive evaluation across medical benchmarks and five clinical generation tasks (e.g., radiology and discharge summarization) shows that merged models can effectively mitigate catastrophic forgetting, preserve clinical domain expertise, and retain instruction-following ability. In addition, our model merging strategies demonstrate training efficiency, achieving performance on par with fully fine-tuned baselines under severely constrained supervision (e.g., 64-shot vs. 256-shot). Consequently, weight-space merging constitutes a highly scalable solution for adapting open-source LLMs to clinical applications, facilitating broader deployment in resource-constrained healthcare environments.
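The interpolation-based merging described in the abstract can be sketched as element-wise interpolation over the two models' weights. The snippet below is a minimal illustration, assuming both models share an identical architecture; the `merge_state_dicts` helper, the `alpha` value, and the toy `nn.Linear` stand-ins are hypothetical and not the authors' implementation or exact merge recipe.

```python
# Minimal sketch of weight-space model merging via linear interpolation.
# theta_merged = (1 - alpha) * theta_a + alpha * theta_b
# (alpha and the helper below are illustrative, not the paper's recipe.)
import torch
import torch.nn as nn

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Interpolate two state dicts from architecturally identical models."""
    merged = {}
    for name, w_a in sd_a.items():
        w_b = sd_b[name]
        assert w_a.shape == w_b.shape, f"shape mismatch at {name}"
        merged[name] = (1.0 - alpha) * w_a + alpha * w_b
    return merged

# Toy stand-ins for the two parents (in the paper: GatorTronLlama and
# Llama-3.1-8B-Instruct; here, tiny linear layers for illustration).
torch.manual_seed(0)
domain_model = nn.Linear(4, 2)   # clinical foundation model (stand-in)
general_model = nn.Linear(4, 2)  # general instruct model (stand-in)

merged_sd = merge_state_dicts(domain_model.state_dict(),
                              general_model.state_dict(), alpha=0.5)

fused = nn.Linear(4, 2)
fused.load_state_dict(merged_sd)
```

With `alpha=0.5` this reduces to a uniform average of the parents; in practice `alpha` is a tunable knob trading off domain expertise against instruction-following retention.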
Problem

Research questions and friction points this paper is trying to address.

catastrophic forgetting
instruction following
large language models
domain adaptation
clinical applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

model merging
catastrophic forgetting
instruction following
domain adaptation
weight-space interpolation
Mengxian Lyu
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, USA
Cheng Peng
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, USA
Ziyi Chen
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, USA
Mengyuan Zhang
Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, FL, USA
Jieting Li Lu
Department of Engineering Education, Herbert Wertheim College of Engineering, University of Florida, Gainesville, FL, USA
Yonghui Wu
Associate Professor, University of Florida
Natural Language Processing · Machine Learning · Medical Informatics · Pharmacovigilance