47B Mixture-of-Experts Beats 671B Dense Models on Chinese Medical Examinations

📅 2025-11-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study systematically evaluates large language models (LLMs) on Chinese medical licensing examinations, focusing on clinical reasoning capabilities across seven medical specialties and two difficulty levels (attending- and senior-physician-tier). Method: We construct the first comprehensive, finely annotated, and standardized Chinese medical benchmark (2,800 questions) and conduct rigorous evaluation of 27 state-of-the-art LLMs. Contribution/Results: We reveal—for the first time—that Mixture-of-Experts (MoE) architectures (e.g., Mixtral-8x7B) substantially outperform ultra-large dense models (e.g., DeepSeek-R1-671B), with no positive correlation between parameter count and medical task performance. Significant inter-specialty performance disparities are observed. Our multi-level difficulty annotation schema, specialty-specific fine-grained categorization, and lightweight MoE inference framework advance medical LLM evaluation paradigms. Mixtral-8x7B achieves the highest accuracy (74.25%), surpassing the 671B dense model by over 10 percentage points, while exhibiting minimal performance degradation (<1%) on senior-tier questions—demonstrating exceptional generalization.

📝 Abstract
The rapid advancement of large language models (LLMs) has prompted significant interest in their potential applications in medical domains. This paper presents a comprehensive benchmark evaluation of 27 state-of-the-art LLMs on Chinese medical examination questions, encompassing seven medical specialties across two professional levels. We introduce a robust evaluation framework that assesses model performance on 2,800 carefully curated questions from the cardiovascular, gastroenterology, hematology, infectious diseases, nephrology, neurology, and respiratory medicine domains. Our dataset distinguishes between attending physician and senior physician difficulty levels, providing nuanced insights into model capabilities across varying complexity. Our empirical analysis reveals substantial performance variations among models, with Mixtral-8x7B achieving the highest overall accuracy of 74.25%, followed by DeepSeek-R1-671B at 64.07%. Notably, we observe no consistent correlation between model size and performance, as evidenced by the strong performance of smaller mixture-of-experts architectures. The evaluation demonstrates significant performance gaps between medical specialties, with models generally performing better on cardiovascular and neurology questions than on gastroenterology and nephrology questions. Furthermore, our analysis indicates minimal performance degradation between attending and senior physician levels for top-performing models, suggesting robust generalization capabilities. This benchmark provides critical insights for the deployment of LLMs in medical education and clinical decision support systems, highlighting both the promise and current limitations of these technologies in specialized medical contexts.
Problem

Research questions and friction points this paper is trying to address.

Evaluates 27 LLMs on Chinese medical exam questions
Assesses performance across 7 specialties and 2 difficulty levels
Analyzes model size impact and generalization in medical contexts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts architecture outperforms larger dense models
Evaluation framework uses 2,800 curated Chinese medical exam questions
No consistent correlation found between model size and performance
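The evaluation described above reduces to grading multiple-choice answers and aggregating accuracy by specialty and by difficulty tier. A minimal sketch of that aggregation step is shown below; the record fields (`specialty`, `tier`, `model_answer`, `gold_answer`) are hypothetical illustrations of the schema, not the paper's actual data format.

```python
from collections import defaultdict

def accuracy_by_group(records, key):
    """Fraction of correct answers, grouped by one record field.

    Each record is a dict with hypothetical fields:
    'specialty', 'tier', 'model_answer', 'gold_answer'.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r[key]] += 1
        correct[r[key]] += int(r["model_answer"] == r["gold_answer"])
    return {group: correct[group] / total[group] for group in total}

# Toy records illustrating the schema (not real benchmark data).
records = [
    {"specialty": "cardiovascular", "tier": "attending",
     "model_answer": "B", "gold_answer": "B"},
    {"specialty": "cardiovascular", "tier": "senior",
     "model_answer": "A", "gold_answer": "C"},
    {"specialty": "nephrology", "tier": "attending",
     "model_answer": "D", "gold_answer": "D"},
]

by_specialty = accuracy_by_group(records, "specialty")
by_tier = accuracy_by_group(records, "tier")
```

Comparing `by_tier["attending"]` with `by_tier["senior"]` is what surfaces the under-1% degradation the paper reports for Mixtral-8x7B.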
👥 Authors
Chiung-Yi Tseng
LuxMuse AI
Danyang Zhang
AI Agent Lab, Vokram Group, United States
Tianyang Wang
University of Alabama at Birmingham — machine learning (deep learning), computer vision
Hongying Luo
AI Agent Lab, Vokram Group, United Kingdom
Lu Chen
AI Agent Lab, Vokram Group, United Kingdom
Junmin Huang
AI Agent Lab, Vokram Group, United Kingdom
Jibin Guan
University of Minnesota, United States
Junfeng Hao
Chief Physician, Hemodialysis Center, Affiliated Hospital of Guangdong Medical University — nephrology, hemodialysis, dialysis vascular access
Jun-Jie Song
Imperial College London, United Kingdom
Ziqian Bi
Purdue University, United States