GPBench: A Comprehensive and Fine-Grained Benchmark for Evaluating Large Language Models as General Practitioners

📅 2025-03-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM evaluations in healthcare predominantly rely on exam-style multiple-choice questions, failing to reflect the holistic clinical decision-making demands of general practitioners (GPs). Method: We introduce the first fine-grained, scenario-based benchmark tailored to routine GP practice, grounded in a comprehensive general medicine competency framework. It integrates knowledge-oriented multiple-choice items with expert-annotated clinical vignettes, covering ten core competencies—including disease staging, complication recognition, therapeutic nuance, and medication guideline adherence. Innovations include multidimensional clinical management annotations, real-world scenario modeling, and a hierarchical scoring mechanism. Results: Empirical evaluation of leading LLMs reveals significant deficiencies across all ten competencies, indicating that current models lack the reliability required for unsupervised, autonomous GP-level clinical decision support.
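The summary mentions a hierarchical scoring mechanism over ten competencies. As a rough illustration only (the paper's actual weights, sub-criteria, and rubric names are not given here, so everything below is invented), such a mechanism can be sketched as item scores rolling up into weighted sub-criterion scores, which in turn roll up into a competency-level score:

```python
# Hypothetical sketch of a hierarchical scoring mechanism: item scores
# aggregate into sub-criterion scores, which aggregate into a competency
# score. All names and weights below are invented for illustration and
# are not taken from GPBench itself.

def weighted_mean(scores, weights):
    """Weighted average of scores, each assumed to lie in [0, 1]."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def score_competency(sub_criteria):
    """sub_criteria: list of (item_scores, item_weights, sub_weight)."""
    sub_scores = [weighted_mean(s, w) for s, w, _ in sub_criteria]
    sub_weights = [sw for _, _, sw in sub_criteria]
    return weighted_mean(sub_scores, sub_weights)

# Example: one competency ("medication usage") with two sub-criteria.
medication_usage = [
    ([1.0, 0.5], [1, 1], 2),  # guideline-adherence items, sub-weight 2
    ([0.0, 1.0], [1, 2], 1),  # dosing-detail items, sub-weight 1
]
print(score_competency(medication_usage))  # -> 0.7222...
```

The two-level roll-up lets harder or more safety-critical sub-criteria (e.g. dosing detail) carry different weight than routine ones, which is the point of scoring hierarchically rather than averaging all items flatly.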

📝 Abstract
General practitioners (GPs) serve as the cornerstone of primary healthcare systems by providing continuous and comprehensive medical services. However, due to the community-oriented nature of their practice, uneven training, and resource gaps, clinical proficiency among GPs can vary significantly across regions and healthcare settings. Large Language Models (LLMs) have demonstrated great potential in clinical and medical applications, making them a promising tool for supporting general practice. However, most existing benchmarks and evaluation frameworks focus on exam-style assessments, typically multiple-choice questions, and lack comprehensive test sets that accurately mirror the real-world scenarios encountered by GPs. To evaluate how effectively LLMs can make decisions in the daily work of GPs, we designed GPBench, which consists of test questions drawn from clinical practice together with a novel evaluation framework. The test set includes multiple-choice questions that assess fundamental knowledge of general practice, as well as realistic, scenario-based problems. All questions are meticulously annotated by experts, incorporating rich, fine-grained information related to clinical management. The proposed evaluation framework is based on the competency model for general practice, providing a comprehensive methodology for assessing LLM performance in real-world settings. As the first large-model evaluation set targeting GP decision-making scenarios, GPBench allows us to evaluate current mainstream LLMs. Expert assessment reveals that these models exhibit at least ten major shortcomings in areas such as disease staging, complication recognition, treatment detail, and medication usage. Overall, existing LLMs are not yet suitable for independent use in real-world GP working scenarios without human oversight.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs as General Practitioners in real-world scenarios
Assessing clinical proficiency gaps among GPs using LLMs
Identifying shortcomings of LLMs in GP decision-making tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

GPBench benchmark for real-world GP scenarios
Fine-grained expert-annotated clinical questions
Competency-based LLM evaluation framework
Zheqing Li
The Sixth Affiliated Hospital of Sun Yat-sen University
Yiying Yang
Fudan University
3D computer vision, machine learning
Jiping Lang
The Sixth Affiliated Hospital of Sun Yat-sen University
Wenhao Jiang
GML, Tencent, PolyU
Computer Vision, Machine Learning, Foundation Models
Yuhang Zhao
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
Shuang Li
The Sixth Affiliated Hospital of Sun Yat-sen University
Dingqian Wang
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
Zhu Lin
The Sixth Affiliated Hospital of Sun Yat-sen University
Xuanna Li
The Sixth Affiliated Hospital of Sun Yat-sen University
Yuze Tang
The Sixth Affiliated Hospital of Sun Yat-sen University
Jiexian Qiu
Xinyi People’s Hospital
Xiaolin Lu
Xinyi People’s Hospital
Hongji Yu
Xinyi People’s Hospital
Shuang Chen
The Sixth Affiliated Hospital of Sun Yat-sen University
Yuhua Bi
The Sixth Affiliated Hospital of Sun Yat-sen University
Xiaofei Zeng
The Sixth Affiliated Hospital of Sun Yat-sen University
Yixian Chen
School of Intelligent Systems Engineering, Sun Yat-sen University
Junrong Chen
The Sixth Affiliated Hospital of Sun Yat-sen University
Lin Yao
The Sixth Affiliated Hospital of Sun Yat-sen University