🤖 AI Summary
Existing benchmarks for evaluating medical large language models (LLMs) suffer from three critical limitations: insufficient clinical authenticity, fragile data-management practices, and the absence of safety-oriented metrics. To address these gaps, we propose MedCheck, the first systematic, full-lifecycle evaluation framework for medical LLM benchmarks. Methodologically, it decomposes benchmark development into five phases and introduces a checklist of 46 medically tailored criteria that serves a dual purpose: diagnosing weaknesses in existing benchmarks and guiding the design of new ones. Integrating lifecycle analysis, empirical evaluation, and governance-aware design, MedCheck incorporates multidimensional safety assessments covering clinical alignment, data integrity, model robustness, and uncertainty awareness. An empirical analysis of 53 mainstream medical benchmarks reveals pervasive clinical misalignment, data-contamination risks, and systemic omissions in safety evaluation. MedCheck thus provides both a theoretical foundation and an actionable pathway toward more reliable, transparent, and safe AI evaluation in healthcare.
📝 Abstract
Large language models (LLMs) show significant potential in healthcare, prompting numerous benchmarks to evaluate their capabilities. However, concerns persist regarding the reliability of these benchmarks, which often lack clinical fidelity, robust data management, and safety-oriented evaluation metrics. To address these shortcomings, we introduce MedCheck, the first lifecycle-oriented assessment framework specifically designed for medical benchmarks. Our framework deconstructs a benchmark's development into five continuous stages, from design to governance, and provides a comprehensive checklist of 46 medically tailored criteria. Using MedCheck, we conducted an in-depth empirical evaluation of 53 medical LLM benchmarks. Our analysis uncovers widespread, systemic issues, including a profound disconnect from clinical practice, a crisis of data integrity due to unmitigated contamination risks, and a systematic neglect of safety-critical evaluation dimensions such as model robustness and uncertainty awareness. Based on these findings, MedCheck serves as both a diagnostic tool for existing benchmarks and an actionable guideline to foster a more standardized, reliable, and transparent approach to evaluating AI in healthcare.
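To make the framework's shape concrete, here is a minimal Python sketch of how a lifecycle-staged, checklist-driven audit could be organized. The stage names, criterion IDs and descriptions, and the `SomeMedQA` benchmark are illustrative assumptions: the paper states only that the lifecycle runs from design to governance and that the checklist has 46 criteria, so none of the identifiers below come from MedCheck itself.

```python
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    """Hypothetical names for the five lifecycle stages; the paper only
    says the lifecycle spans 'from design to governance'."""
    DESIGN = "design"
    DATA_CURATION = "data curation"
    CONSTRUCTION = "construction"
    EVALUATION = "evaluation"
    GOVERNANCE = "governance"


@dataclass
class Criterion:
    """One checklist item (the real checklist has 46), tied to a stage."""
    cid: str
    stage: Stage
    description: str


@dataclass
class Audit:
    """Diagnostic record: which criteria a given benchmark satisfies."""
    benchmark: str
    satisfied: set[str] = field(default_factory=set)

    def coverage(self, criteria: list[Criterion], stage: Stage) -> float:
        """Fraction of a stage's criteria that the benchmark satisfies."""
        in_stage = [c for c in criteria if c.stage == stage]
        if not in_stage:
            return 0.0
        return sum(c.cid in self.satisfied for c in in_stage) / len(in_stage)


# Invented example criteria, echoing the dimensions named in the abstract.
CRITERIA = [
    Criterion("D1", Stage.DESIGN, "Tasks reflect real clinical workflows"),
    Criterion("C1", Stage.DATA_CURATION, "Contamination risk is assessed and mitigated"),
    Criterion("E1", Stage.EVALUATION, "Robustness to input perturbations is measured"),
    Criterion("E2", Stage.EVALUATION, "Uncertainty awareness is evaluated"),
]

audit = Audit(benchmark="SomeMedQA", satisfied={"D1"})
for stage in Stage:
    print(f"{stage.value}: {audit.coverage(CRITERIA, stage):.0%}")
```

A per-stage coverage score like this mirrors the checklist's stated dual purpose: low coverage diagnoses where an existing benchmark falls short, while the unsatisfied criteria double as guidance for building or revising one.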