A Modular Approach for Clinical SLMs Driven by Synthetic Data with Pre-Instruction Tuning, Model Merging, and Clinical-Tasks Alignment

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address bottlenecks in clinical LLM deployment (high computational overhead, strict latency requirements, scarcity of real-world medical data, and heightened privacy sensitivity), this paper proposes a lightweight, efficient adaptation framework for small language models (SLMs). The framework integrates three components: pre-instruction tuning of expert models on relevant medical and clinical corpora, multi-expert model merging, and clinical-task alignment. The authors construct MediFlow, a synthetic instruction dataset of 2.5 million samples spanning 14 medical NLP tasks, and release CLUE+, an extended clinical benchmark that doubles the size of CLUE. Leveraging diverse, privacy-preserving sources (PMC, clinical guidelines, and MedWiki), the resulting 3.8B-parameter MediPhi models achieve relative improvements over the base model of 64.3% on medical entity recognition, 49.5% on radiology report generation, and 44% on ICD-10 coding, outperforming GPT-4-0125 by 14% on the latter. Alignment via supervised fine-tuning (SFT) and direct preference optimization (DPO) on synthetic data further boosts average performance by 18.9%.
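The DPO step mentioned in the summary optimizes a standard preference objective: the negative log-sigmoid of a scaled margin between the policy's and a reference model's log-probabilities on chosen vs. rejected responses. A minimal sketch of that loss on scalar log-probabilities (the function name and the toy inputs are illustrative, not from the paper):

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are summed token log-probabilities of the chosen and rejected
    responses under the policy and a frozen reference model. beta scales
    the implicit reward margin.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)); small when the policy prefers the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no margin, the loss is -log(0.5) = log 2; it shrinks as the policy
# raises the chosen response's likelihood relative to the reference.
baseline = dpo_loss(0.0, 0.0, 0.0, 0.0)
improved = dpo_loss(-1.0, -3.0, -2.0, -2.0)
```

In practice this is computed batch-wise over token log-probabilities of a frozen reference copy of the model; the scalar form above just makes the margin structure explicit.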

📝 Abstract
High computation costs and latency of large language models such as GPT-4 have limited their deployment in clinical settings. Small language models (SLMs) offer a cost-effective alternative, but their limited capacity requires biomedical domain adaptation, which remains challenging. An additional bottleneck is the unavailability and high sensitivity of clinical data. To address these challenges, we propose a novel framework for adapting SLMs into high-performing clinical models. We introduce the MediPhi collection of 3.8B-parameter SLMs developed with our novel framework: pre-instruction tuning of experts on relevant medical and clinical corpora (PMC, Medical Guideline, MedWiki, etc.), model merging, and clinical-tasks alignment. To cover most clinical tasks, we extended the CLUE benchmark to CLUE+, doubling its size. Our expert models deliver relative improvements on this benchmark over the base model without any task-specific fine-tuning: 64.3% on medical entities, 49.5% on radiology reports, and 44% on ICD-10 coding (outperforming GPT-4-0125 by 14%). We unify the expert models into MediPhi via model merging, preserving gains across benchmarks. Furthermore, we built the MediFlow collection, a synthetic dataset of 2.5 million high-quality instructions on 14 medical NLP tasks, 98 fine-grained document types, and JSON format support. Alignment of MediPhi using supervised fine-tuning and direct preference optimization achieves further gains of 18.9% on average.
Problem

Research questions and friction points this paper is trying to address.

High computation costs and latency limit the deployment of LLMs such as GPT-4 in clinical settings
SLMs are cost-effective but their limited capacity makes biomedical domain adaptation challenging
Scarcity and high sensitivity of clinical data hinder the development of clinical SLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pre-instruction tuning on medical corpora
Model merging for unified performance
Synthetic data for clinical alignment
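The "model merging" bullet above can be sketched as parameter averaging: experts fine-tuned from the same base model are unified by a (weighted) average of their weights. The dict-of-floats representation and the expert names below are illustrative assumptions (real models would merge `torch` state dicts, and the paper's exact merging recipe is not specified here):

```python
def merge_experts(expert_params, weights=None):
    """Weighted average of expert parameter dicts sharing the same keys.

    expert_params: list of {param_name: value} dicts, one per expert,
    all derived from the same base model so keys/shapes match.
    weights: optional mixing coefficients; defaults to a uniform average.
    """
    n = len(expert_params)
    if weights is None:
        weights = [1.0 / n] * n
    merged = {}
    for key in expert_params[0]:
        merged[key] = sum(w * p[key] for w, p in zip(weights, expert_params))
    return merged

# Toy experts standing in for e.g. PMC-, guideline-, and MedWiki-tuned models,
# each with a single shared parameter.
pmc     = {"layer.w": 1.0}
guide   = {"layer.w": 2.0}
medwiki = {"layer.w": 3.0}
unified = merge_experts([pmc, guide, medwiki])
```

Uniform averaging is the simplest instance; non-uniform weights let a merge favor the expert whose corpus is closest to the target clinical tasks.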
Jean-Philippe Corbeil
Microsoft
Natural Language Processing · Deep Learning · Machine Learning

Amin Dada
Institute for AI in Medicine (IKIM), University Hospital Essen

Jean-Michel Attendu
Microsoft
Machine Learning · Natural Language Processing · Acoustics · Signal Processing

Asma Ben Abacha
Microsoft
Artificial Intelligence · Natural Language Processing · Medical Informatics

Alessandro Sordoni
Microsoft Research
Artificial Intelligence · Information Retrieval · Deep Learning

Lucas Caccia
Microsoft Research
Deep Learning · Continual Learning · Natural Language Processing

François Beaulieu
Microsoft Healthcare & Life Sciences

Thomas Lin
Microsoft Healthcare & Life Sciences

J. Kleesiek
IKIM, University Hospital Essen, Germany; Cancer Research Center Cologne Essen (CCCE), German Cancer Consortium (DKTK, Partner site Essen); Department of Physics, TU Dortmund, Dortmund, Germany

Paul Vozila
Microsoft Healthcare & Life Sciences