🤖 AI Summary
Existing LLM evaluations in healthcare predominantly rely on exam-style multiple-choice questions, failing to reflect the holistic clinical decision-making demands of general practitioners (GPs). Method: We introduce the first fine-grained, scenario-based benchmark tailored to routine GP practice, grounded in a comprehensive general medicine competency framework. It integrates knowledge-oriented multiple-choice items with expert-annotated clinical vignettes, covering ten core competencies—including disease staging, complication recognition, therapeutic nuance, and medication guideline adherence. Innovations include multidimensional clinical management annotations, real-world scenario modeling, and a hierarchical scoring mechanism. Results: Empirical evaluation of leading LLMs reveals significant deficiencies across all ten competencies, indicating that current models lack the reliability required for unsupervised, autonomous GP-level clinical decision support.
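The summary mentions a hierarchical scoring mechanism built on multidimensional clinical annotations. A minimal sketch of how such a mechanism might work, assuming expert-annotated sub-scores are weight-averaged within each competency and then averaged across competencies; all competency names, weights, and the aggregation rule below are illustrative assumptions, not GPBench's actual scheme:

```python
# Hypothetical hierarchical scoring sketch: fine-grained sub-scores roll up
# into per-competency scores, which roll up into one overall score.
# Names and weights are illustrative, not taken from the GPBench paper.
from dataclasses import dataclass

@dataclass
class SubScore:
    name: str      # fine-grained annotation, e.g. "stage identification"
    weight: float  # relative importance within its competency
    score: float   # grader's score in [0, 1]

def competency_score(sub_scores):
    """Weighted average of the sub-scores within one competency."""
    total_weight = sum(s.weight for s in sub_scores)
    return sum(s.weight * s.score for s in sub_scores) / total_weight

def overall_score(competencies):
    """Unweighted mean across competencies (an assumed aggregation rule)."""
    return sum(competency_score(subs) for subs in competencies.values()) / len(competencies)

# Example: two competencies with expert-annotated sub-dimensions.
competencies = {
    "disease staging": [
        SubScore("stage identification", 0.6, 1.0),
        SubScore("progression reasoning", 0.4, 0.5),
    ],
    "medication usage": [
        SubScore("drug choice", 0.5, 1.0),
        SubScore("dosage", 0.3, 0.0),
        SubScore("guideline adherence", 0.2, 1.0),
    ],
}

print(round(overall_score(competencies), 3))
```

Keeping the per-competency scores separate, rather than reporting only the overall mean, is what lets an evaluation surface deficiencies in specific areas such as disease staging or medication usage.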
📝 Abstract
General practitioners (GPs) serve as the cornerstone of primary healthcare systems by providing continuous and comprehensive medical services. However, due to the community-oriented nature of their practice, uneven training, and resource gaps, clinical proficiency among GPs can vary significantly across regions and healthcare settings. Large Language Models (LLMs) have demonstrated great potential in clinical and medical applications, making them a promising tool for supporting general practice. However, most existing benchmarks and evaluation frameworks focus on exam-style assessments, typically multiple-choice questions, and lack comprehensive assessment sets that accurately mirror the real-world scenarios encountered by GPs. To evaluate how effectively LLMs can make decisions in the daily work of GPs, we designed GPBench, which consists of both test questions drawn from clinical practice and a novel evaluation framework. The test set includes multiple-choice questions that assess fundamental knowledge of general practice, as well as realistic, scenario-based problems. All questions are meticulously annotated by experts, incorporating rich fine-grained information related to clinical management. The proposed LLM evaluation framework is based on the competency model for general practice, providing a comprehensive methodology for assessing LLM performance in real-world settings. As the first large-model evaluation set targeting GP decision-making scenarios, GPBench allows us to evaluate current mainstream LLMs. Expert assessment reveals at least ten major shortcomings in areas such as disease staging, complication recognition, treatment detail, and medication usage. Overall, existing LLMs are not yet suitable for independent use in real-world GP working scenarios without human oversight.