🤖 AI Summary
Existing evaluations of medical large language models (LLMs) rely on static examinations (e.g., USMLE), failing to capture the dynamic, interactive decision-making demands of real-world clinical practice.
Method: We propose the first dynamic validation framework for medical decision-making, integrating a high-fidelity patient simulator with a clinical rubric generator to establish a closed-loop interactive evaluation environment. On top of this multi-dimensional dynamic assessment system, we train a 32B-parameter reasoning-enhanced model with a modified Group Relative Policy Optimization (GRPO) algorithm via large-scale interactive reinforcement learning.
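The closed loop described above can be sketched as follows. This is a minimal, illustrative mock-up of the interaction between a model under training, a patient simulator, and a rubric generator; all class and function names (`run_episode`, `Rubric`, `opening_complaint`, etc.) are assumptions for illustration, not the paper's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Rubric:
    """One case-specific evaluation criterion with a relative weight."""
    criterion: str
    weight: float

def score_dialogue(dialogue, rubrics, grader):
    """Weighted average of per-criterion grades in [0, 1]."""
    total = sum(r.weight for r in rubrics)
    raw = sum(r.weight * grader(r.criterion, dialogue) for r in rubrics)
    return raw / total if total else 0.0

def run_episode(model, simulator, rubric_gen, grader, max_turns=8):
    """One closed-loop consultation: model converses with the patient
    simulator, then dynamically generated rubrics score the dialogue."""
    dialogue = []
    msg = simulator.opening_complaint()
    for _ in range(max_turns):
        reply = model(msg)
        dialogue.append((msg, reply))
        if simulator.is_done(reply):  # e.g. a final diagnosis was given
            break
        msg = simulator.respond(reply)
    rubrics = rubric_gen(dialogue)  # multi-dimensional, case-specific
    return score_dialogue(dialogue, rubrics, grader)
```

The returned scalar score is what a reinforcement-learning loop would consume as the reward signal for that episode.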
Contribution/Results: Our framework substantially overcomes the limitations of static benchmarks: on HealthBench, it surpasses all open-source models and most closed-source models; its score on the Hard subset exceeds 32, a new state of the art for open-source medical LLMs, approaching the performance of GPT-5 and thereby establishing the current Pareto frontier of performance versus scale in medical LLMs.
📝 Abstract
As large language models (LLMs) advance in conversational and reasoning capabilities, their practical application in healthcare has become a critical research focus. However, there is a notable gap between the performance of medical LLMs on static benchmarks such as USMLE and their utility in real-world clinical decision-making. This discrepancy arises because traditional exams fail to capture the dynamic, interactive nature of medical consultations. To address this challenge, we introduce a novel dynamic verification framework that moves beyond static answer verification, establishing a large-scale, high-fidelity interactive reinforcement learning system. Our framework comprises two key components: a Patient Simulator that creates realistic clinical environments using de-identified medical records, and a Clinical Rubrics Generator that dynamically produces multi-dimensional evaluation metrics. Building on this foundation, we develop Baichuan-M2, a 32B-parameter medical augmented reasoning model trained through a multi-stage reinforcement learning strategy with an improved Group Relative Policy Optimization (GRPO) algorithm. Evaluated on HealthBench, Baichuan-M2 outperforms all other open-source models and most advanced closed-source counterparts, achieving a score above 32 on the challenging HealthBench Hard benchmark, a threshold previously exceeded only by GPT-5. Our work demonstrates that a robust dynamic verification system is essential for aligning LLM capabilities with practical clinical applications, establishing a new Pareto front in the performance-parameter trade-off for medical AI deployment.
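For readers unfamiliar with GRPO, its core idea is to replace a learned value critic with a group-relative baseline: several responses are sampled for the same prompt, and each response's reward is normalized against the group's mean and standard deviation. Below is a minimal sketch of that advantage computation with toy rewards; the specific modifications Baichuan-M2 makes to GRPO are not reproduced here.

```python
def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages: z-score each sampled response's
    reward against its own group (same prompt, G rollouts)."""
    g = len(rewards)
    mean = sum(rewards) / g
    var = sum((r - mean) ** 2 for r in rewards) / g
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Toy example: four rollouts for one patient case, scored by rubrics.
# Above-average rollouts get positive advantages, below-average negative.
advs = grpo_advantages([0.2, 0.8, 0.5, 0.9])
```

Because the baseline is the group mean, the advantages of a group always sum to (approximately) zero, which is what makes a separate value network unnecessary.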