🤖 AI Summary
Current medical large language model (LLM) evaluation benchmarks, including MedQA, MedMCQA, PubMedQA, and MMLU, suffer from systemic limitations: poor clinical relevance, low data transparency, insufficient validation rigor, and training data contamination. Method: The authors propose the first standardized evaluation framework explicitly designed for clinical authenticity, systematically auditing existing datasets and evaluating peer-reviewed medical journal challenge questions as candidate unbiased assessment tools. Contribution/Results: (1) They establish three evaluation principles: clinical authenticity, traceability, and isolation; (2) they advocate multi-institutional collaboration to develop secure, stratified, high-fidelity medical evaluation datasets; and (3) they advance medical AI evaluation from a "performance-oriented" paradigm toward a "scientifically trustworthy" one. This framework provides both a methodological foundation and an actionable pathway for building robust, clinically reliable AI systems.
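To make the three principles concrete, here is a minimal, self-contained Python sketch of what an automated first-pass audit along these lines might look like. Everything here is an illustrative assumption: the `BenchmarkItem` fields, the proxy checks, and the training-cutoff heuristic are not the paper's actual methodology, which calls for far more rigorous, expert-driven validation.

```python
from __future__ import annotations

from dataclasses import dataclass

# Hypothetical benchmark item; all field names are illustrative, not from the paper.
@dataclass
class BenchmarkItem:
    question: str
    options: dict[str, str]
    answer: str
    source: str | None = None        # provenance, e.g. journal citation or DOI
    vignette: bool = False           # whether the stem is a clinical vignette
    public_since: int | None = None  # year the item became publicly available

def audit(item: BenchmarkItem, model_training_cutoff: int = 2023) -> dict[str, bool]:
    """Score one item against the three principles using crude proxy checks."""
    return {
        # Clinical authenticity: prefer patient-centered vignettes over
        # recall-style factoid questions.
        "clinical_authenticity": item.vignette,
        # Traceability: every item should carry verifiable provenance.
        "traceability": item.source is not None,
        # Isolation: items published before the model's training cutoff may
        # have leaked into its training data.
        "isolation": (item.public_since is None
                      or item.public_since > model_training_cutoff),
    }

item = BenchmarkItem(
    question="A 54-year-old man presents with crushing chest pain...",
    options={"A": "Aspirin", "B": "Antibiotics", "C": "Antacids", "D": "Rest"},
    answer="A",
    source="Journal challenge question, 2024 (hypothetical citation)",
    vignette=True,
    public_since=2024,
)
print(audit(item))
# {'clinical_authenticity': True, 'traceability': True, 'isolation': True}
```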
📝 Abstract
Objectives: To evaluate the current limitations of large language models (LLMs) in medical question answering, focusing on the quality of the datasets used for their evaluation. Materials and Methods: Widely used benchmark datasets, including MedQA, MedMCQA, PubMedQA, and MMLU, were reviewed for their rigor, transparency, and relevance to clinical scenarios. Alternatives, such as challenge questions published in medical journals, were also analyzed to assess their potential as unbiased evaluation tools. Results: Most existing datasets lack clinical realism, transparency, and robust validation processes. Publicly available challenge questions offer some benefits but are limited by their small size, narrow scope, and exposure during LLM training. These gaps highlight the need for secure, comprehensive, and representative datasets. Conclusion: A standardized framework is critical for evaluating LLMs in medicine. Collaborative efforts among institutions and policymakers are needed to ensure that datasets and methodologies are rigorous, unbiased, and reflective of clinical complexities.
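As a concrete illustration of the exposure concern, the sketch below estimates how much of a benchmark question already appears verbatim in a reference text corpus via word n-gram overlap, one common proxy for training data contamination. The technique, names, and toy data are all illustrative assumptions; the paper does not prescribe a specific detection method.

```python
import re

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """Lowercased word n-grams; long n-grams rarely collide by chance."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_fraction(benchmark_question: str, corpus_docs: list[str], n: int = 8) -> float:
    """Fraction of the question's n-grams that also occur in the corpus.

    A high fraction suggests the item (or a near-duplicate) was present in
    text the model may have been trained on, violating isolation.
    """
    question_grams = ngrams(benchmark_question, n)
    if not question_grams:
        return 0.0
    corpus_grams = set().union(*(ngrams(doc, n) for doc in corpus_docs))
    return len(question_grams & corpus_grams) / len(question_grams)

# Toy example: the question text was reposted verbatim in a public document.
question = ("A 67-year-old woman presents with sudden painless vision loss "
            "in the left eye. Which diagnosis is most likely?")
corpus = [
    "Forum post: A 67-year-old woman presents with sudden painless vision "
    "loss in the left eye. Which diagnosis is most likely? Answer below...",
]
print(f"overlap: {overlap_fraction(question, corpus):.0%}")  # 100% -> likely contaminated
```

In practice such a check can only flag verbatim or near-verbatim reuse against whatever corpus is available, which is one reason the authors argue that genuinely held-out, secure datasets are needed rather than post hoc contamination screening alone.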