🤖 AI Summary
High-quality, domain-specific large language models (LLMs) remain scarce in medicine, largely due to the complexity of medical knowledge and the limited availability of high-quality training data. To address this gap, the authors introduce Baichuan-M1, a series of LLMs optimized for medical applications and trained from scratch, rather than built by continuing pretraining on an existing model or post-training a general-purpose base model. Trained on 20 trillion tokens with methods designed to balance general capabilities against medical expertise, Baichuan-M1 performs strongly in general domains such as mathematics and coding while excelling in specialized medical fields. A smaller variant, Baichuan-M1-14B, has been open-sourced.
📝 Abstract
The current generation of large language models (LLMs) is typically designed for broad, general-purpose applications, while domain-specific LLMs, especially in vertical fields like medicine, remain relatively scarce. In particular, the development of highly efficient and practical LLMs for the medical domain is challenging due to the complexity of medical knowledge and the limited availability of high-quality data. To bridge this gap, we introduce Baichuan-M1, a series of large language models specifically optimized for medical applications. Unlike traditional approaches that simply continue pretraining on existing models or apply post-training to a general base model, Baichuan-M1 is trained from scratch with a dedicated focus on enhancing medical capabilities. Our model is trained on 20 trillion tokens and incorporates a range of effective training methods that strike a balance between general capabilities and medical expertise. As a result, Baichuan-M1 not only performs strongly across general domains such as mathematics and coding but also excels in specialized medical fields. We have open-sourced Baichuan-M1-14B, a mini version of our model, which can be accessed through the following links.