AI Summary
To address privacy preservation and on-premises deployment requirements in healthcare, this work introduces the first large-scale open-source medical dialogue dataset (160K+ high-quality samples) and releases a corresponding family of fine-tuned models. Methodologically, we build upon the LLaMA/Alpaca architecture, employing supervised fine-tuning augmented with medical instruction refinement, leveraging clinical guidelines, authoritative textbooks, and real physician licensure examination questions (e.g., USMLE) to construct structured, clinically grounded dialogues. Our contributions are threefold: (1) bridging the gap in privacy-sensitive, offline-deployable open-weight LLMs for medical applications; (2) establishing a standardized, exam-aligned evaluation benchmark targeting physician competency assessment; and (3) achieving >35% improvement in reasoning accuracy over base models on simulated clinical evaluations, with preliminary HIPAA compliance verification, enabling secure, controllable deployment in clinical decision support, medical education, and diagnostic assistance.
Abstract
As large language models (LLMs) like OpenAI's GPT series continue to make strides, we witness the emergence of artificial intelligence applications in an ever-expanding range of fields. In medicine, these LLMs hold considerable promise for improving medical workflows, diagnostics, patient care, and education. Yet, there is an urgent need for open-source models that can be deployed on-premises to safeguard patient privacy. In our work, we present an innovative dataset consisting of over 160,000 entries, specifically crafted to fine-tune LLMs for effective medical applications. We investigate the impact of fine-tuning publicly accessible pre-trained LLMs on this dataset, and subsequently compare the performance of the pre-trained-only models against that of the fine-tuned models on the examinations that future medical doctors must pass to achieve certification.