MedGUIDE: Benchmarking Clinical Decision-Making in Large Language Models

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) reliably adhere to structured clinical guidelines. To address the lack of standardized evaluation, we introduce MedGUIDE, the first benchmark explicitly designed for guideline adherence, comprising 7,747 high-quality multiple-choice diagnostic questions derived from 55 NCCN cancer decision trees. Candidate questions are generated by LLMs from the decision trees and then filtered through a two-stage quality selection process that combines an expert-labeled reward model with an LLM-as-a-judge ensemble scoring ten clinical and linguistic criteria. We systematically evaluate 25 models spanning general-purpose, open-source, and medically specialized LLMs, and additionally test two mitigation strategies: injecting the relevant guideline into the prompt and continued domain-adaptive pretraining. Even domain-tuned models achieve less than 60% average accuracy on guideline-consistency tasks. These results reveal systemic deficiencies in current medical LLMs' capacity for structured clinical reasoning, exposing critical gaps for safe clinical deployment and pointing to actionable directions for improvement.
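As a concrete illustration, here is a minimal Python sketch of that two-stage filter. The `reward_score` and `judge_vote` stubs, the criterion names, and the 0.7 threshold are hypothetical placeholders, not interfaces or values from the paper.

```python
# Hypothetical sketch of the two-stage quality filter described above.
REWARD_THRESHOLD = 0.7  # assumed stage-1 cutoff, not from the paper
CRITERIA = ["clinical_accuracy", "guideline_fidelity", "fluency"]  # the paper uses ten criteria; these names are illustrative

def reward_score(question: dict) -> float:
    # Placeholder: a real implementation would call a reward model
    # trained on expert quality labels.
    return 1.0

def judge_vote(judge: str, question: dict, criterion: str) -> bool:
    # Placeholder: a real implementation would prompt one LLM judge
    # to pass or fail the question on a single criterion.
    return True

def two_stage_filter(candidates: list[dict], judges: list[str]) -> list[dict]:
    kept = []
    for q in candidates:
        # Stage 1: cheap reward-model screen.
        if reward_score(q) < REWARD_THRESHOLD:
            continue
        # Stage 2: every criterion must pass a majority of the judge ensemble.
        passes_all = all(
            sum(judge_vote(j, q, c) for j in judges) > len(judges) / 2
            for c in CRITERIA
        )
        if passes_all:
            kept.append(q)
    return kept
```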

📝 Abstract
Clinical guidelines, typically structured as decision trees, are central to evidence-based medical practice and critical for ensuring safe and accurate diagnostic decision-making. However, it remains unclear whether Large Language Models (LLMs) can reliably follow such structured protocols. In this work, we introduce MedGUIDE, a new benchmark for evaluating LLMs on their ability to make guideline-consistent clinical decisions. MedGUIDE is constructed from 55 curated NCCN decision trees across 17 cancer types and uses clinical scenarios generated by LLMs to create a large pool of multiple-choice diagnostic questions. We apply a two-stage quality selection process, combining expert-labeled reward models and LLM-as-a-judge ensembles across ten clinical and linguistic criteria, to select 7,747 high-quality samples. We evaluate 25 LLMs spanning general-purpose, open-source, and medically specialized models, and find that even domain-specific LLMs often underperform on tasks requiring structured guideline adherence. We also test whether performance can be improved via in-context guideline inclusion or continued pretraining. Our findings underscore the importance of MedGUIDE in assessing whether LLMs can operate safely within the procedural frameworks expected in real-world clinical settings.
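To make the task concrete, below is a minimal sketch of how a gold answer can be derived by walking an NCCN-style decision tree against a synthetic patient scenario. The nested-dict tree encoding and the clinical branch points are illustrative assumptions; the paper does not specify its internal representation.

```python
# Hypothetical encoding of a (much simplified) guideline decision tree.
tree = {
    "question": "tumor_size_cm > 2",
    "yes": {
        "question": "node_positive",
        "yes": {"recommendation": "neoadjuvant chemotherapy"},
        "no": {"recommendation": "surgery first"},
    },
    "no": {"recommendation": "active surveillance"},
}

def walk(node: dict, findings: dict) -> str:
    """Follow the tree's branch conditions until a recommendation leaf."""
    while "recommendation" not in node:
        # Evaluate the branching condition against the patient findings.
        branch = "yes" if findings[node["question"]] else "no"
        node = node[branch]
    return node["recommendation"]

patient = {"tumor_size_cm > 2": True, "node_positive": False}
print(walk(tree, patient))  # -> "surgery first"
```

The leaf reached by this walk supplies the gold option for a multiple-choice item; distractors can then be drawn from the recommendations of sibling branches.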
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to follow clinical guideline decision trees
Assessing guideline-consistent clinical decision-making in 25 LLMs
Testing whether in-context guideline inclusion or continued pretraining improves adherence (see the prompt sketch below)
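A minimal sketch of the in-context guideline condition referenced above, assuming the relevant decision tree is serialized into the prompt alongside the question. The template wording and the `build_prompt` helper are hypothetical; the paper's exact prompt is not shown here.

```python
# Assumed prompt template for the in-context guideline condition.
PROMPT_TEMPLATE = """You are a clinical decision-support assistant.
Follow the guideline below exactly.

Guideline (decision tree):
{guideline}

Patient scenario:
{scenario}

Question: {question}
Options:
{options}

Answer with the letter of the single best option."""

def build_prompt(guideline: str, scenario: str, question: str, options: list[str]) -> str:
    letters = "ABCDE"
    option_block = "\n".join(f"{letters[i]}. {o}" for i, o in enumerate(options))
    return PROMPT_TEMPLATE.format(
        guideline=guideline, scenario=scenario,
        question=question, options=option_block,
    )
```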
Innovation

Methods, ideas, or system contributions that make the work stand out.

Benchmarking LLMs with MedGUIDE clinical decision trees
Two-stage quality selection using expert and LLM judges
Evaluating guideline adherence with and without in-context guidelines and continued pretraining (scoring sketch below)
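Finally, a minimal sketch of the scoring loop such a benchmark implies, assuming exact-match grading of option letters against the guideline-derived gold answer. `ask_model` is a hypothetical stub for whatever inference API a user wires in.

```python
def ask_model(model: str, prompt: str) -> str:
    """Return the model's chosen option letter, e.g. 'B' (stub)."""
    raise NotImplementedError

def accuracy(model: str, items: list[dict]) -> float:
    """Fraction of items where the model's letter matches the gold option."""
    correct = 0
    for item in items:
        answer = ask_model(model, item["prompt"]).strip().upper()[:1]
        correct += answer == item["gold"]
    return correct / len(items)

# results = {m: accuracy(m, medguide_items) for m in models}
```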