AlpaCare: Instruction-tuned Large Language Models for Medical Application

📅 2023-10-23
🏛️ arXiv.org
📈 Citations: 51 (Influential: 6)
🤖 AI Summary
To address the limited diversity and narrow task coverage of existing biomedical instruction datasets—which constrain large language models’ (LLMs’) medical instruction-following capability and cross-domain generalization—this paper proposes an expert-guided, high-diversity machine-synthesis paradigm to construct MedInstruct-52k, a high-quality medical instruction dataset. Built via collaborative generation using GPT-4 and ChatGPT anchored on an expert-curated seed set, MedInstruct-52k enables effective instruction tuning of LLaMA-family models. Our approach achieves substantial improvements: up to +38.1% absolute gain in accuracy on medical free-form instructions, +6.7% average improvement on general benchmarks, and consistent superiority over state-of-the-art methods in human evaluations of correctness and practical utility. Critically, this work establishes a novel paradigm wherein a compact, high-quality domain-specific dataset drives dual excellence—both general-purpose competence and domain expertise—without requiring massive-scale data or model parameters.
📝 Abstract
Instruction-finetuning (IFT) has become crucial in aligning Large Language Models (LLMs) with diverse human needs and has shown great potential in medical applications. However, previous studies mainly fine-tune LLMs on biomedical datasets with limited diversity, which often rely on benchmarks or narrow task scopes, and hence significantly limit their effectiveness in medical instruction-following and generalizability. To bridge this gap, we propose creating a diverse, machine-generated medical IFT dataset, MedInstruct-52k, using GPT-4 and ChatGPT with a high-quality expert-curated seed set. We then fine-tune LLaMA-series models on the dataset to develop AlpaCare. Despite using a smaller domain-specific dataset than previous medical LLMs, AlpaCare not only demonstrates superior performance on medical applications, with up to a 38.1% absolute gain over the best baselines in medical free-form instruction evaluations, but also achieves 6.7% absolute gains averaged over multiple general-domain benchmarks. Human evaluation further shows that AlpaCare consistently outperforms the best baselines in terms of both correctness and helpfulness. We offer public access to our data, model, and codebase at https://github.com/XZhang97666/AlpaCare.
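The generation recipe described in the abstract, a small expert-curated seed set expanded by GPT-4 and ChatGPT into 52k diverse instructions, can be sketched in the style of seed-conditioned machine generation. A minimal sketch, with a stub in place of the real API call: `generate_instructions`, `stub_model`, and the seed strings are all illustrative assumptions, not the paper's actual prompts, data, or code.

```python
import random

# Illustrative placeholder seed tasks (NOT the paper's expert-curated seed set)
SEED_TASKS = [
    "Explain the difference between type 1 and type 2 diabetes.",
    "List common contraindications for ibuprofen.",
    "Summarize this discharge note for a patient in plain language.",
]

def generate_instructions(seed_tasks, model_call, n_new=5, n_demos=2):
    """Seed-conditioned expansion: sample a few seed tasks as in-context
    demonstrations and ask a strong LLM to propose a novel medical task.
    `model_call` stands in for a GPT-4/ChatGPT API call (hypothetical)."""
    generated = []
    while len(generated) < n_new:
        demos = random.sample(seed_tasks, k=min(n_demos, len(seed_tasks)))
        prompt = (
            "Here are example medical tasks:\n"
            + "\n".join(f"- {d}" for d in demos)
            + "\nPropose one new, different medical task."
        )
        candidate = model_call(prompt)
        # Simple deduplication against the seeds and earlier generations
        if candidate not in seed_tasks and candidate not in generated:
            generated.append(candidate)
    return generated

# Stub model for demonstration; a real pipeline would call an LLM API here.
_counter = 0
def stub_model(prompt):
    global _counter
    _counter += 1
    return f"Synthetic medical task #{_counter}"

new_tasks = generate_instructions(SEED_TASKS, stub_model, n_new=3)
print(len(new_tasks))  # 3 novel, deduplicated instructions
```

The resulting instructions would then (per the abstract) be paired with machine-generated responses and used for standard instruction tuning of a LLaMA-series base model; the quality filtering and expert curation steps are not shown here.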
Problem

Research questions and friction points this paper is trying to address.

Enhancing the medical instruction-following ability of LLMs
Improving the generalizability of medical LLMs
Creating a diverse medical IFT dataset for fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

A diverse, machine-generated medical IFT dataset (MedInstruct-52k)
LLaMA-series models fine-tuned on the dataset to produce AlpaCare
Superior performance on medical applications alongside general-domain gains
👥 Authors

Xinlu Zhang
University of California, Santa Barbara
Machine Learning · Natural Language Processing · Time Series Modeling · Multimodal Learning
Chenxin Tian
Chinese Academy of Medical Sciences and Peking Union Medical College
Xianjun Yang
University of California, Santa Barbara
Lichang Chen
University of Maryland
AI Alignment · Omni-Modality · Reasoning
Zekun Li
University of California, Santa Barbara
Linda R. Petzold
University of California, Santa Barbara