MedCaseReasoning: Evaluating and learning diagnostic reasoning from clinical case reports

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing medical benchmarks (e.g., MedQA, MMLU) evaluate only the final diagnostic accuracy of large language models (LLMs), neglecting the faithfulness and interpretability of the clinical reasoning process. Method: The authors introduce MedCaseReasoning, the first open-access clinical diagnostic reasoning evaluation dataset, comprising 14,489 real-world diagnostic question-and-answer cases paired with multi-step reasoning statements derived from open-access medical case reports, and adopt process-oriented evaluation with reasoning statement recall as a core metric. They also present a supervised fine-tuning (SFT) paradigm on case-based reasoning traces, compatible with mainstream open-source reasoning models (e.g., DeepSeek-R1). Contribution/Results: Evaluation reveals significant shortcomings in state-of-the-art reasoning LLMs: the top-performing open-source model, DeepSeek-R1, achieves only 48% 10-shot diagnostic accuracy and mentions only 64% of clinician reasoning statements. Fine-tuning on the dataset's reasoning traces yields average relative gains of 29% in diagnostic accuracy and 41% in reasoning statement recall, demonstrating substantially improved clinical alignment.

📝 Abstract
Doctors and patients alike increasingly use Large Language Models (LLMs) to diagnose clinical cases. However, unlike domains such as math or coding, where correctness can be objectively defined by the final answer, medical diagnosis requires both the outcome and the reasoning process to be accurate. Currently, widely used medical benchmarks like MedQA and MMLU assess only accuracy in the final answer, overlooking the quality and faithfulness of the clinical reasoning process. To address this limitation, we introduce MedCaseReasoning, the first open-access dataset for evaluating LLMs on their ability to align with clinician-authored diagnostic reasoning. The dataset includes 14,489 diagnostic question-and-answer cases, each paired with detailed reasoning statements derived from open-access medical case reports. We evaluate state-of-the-art reasoning LLMs on MedCaseReasoning and find significant shortcomings in their diagnoses and reasoning: for instance, the top-performing open-source model, DeepSeek-R1, achieves only 48% 10-shot diagnostic accuracy and mentions only 64% of the clinician reasoning statements (recall). However, we demonstrate that fine-tuning LLMs on the reasoning traces derived from MedCaseReasoning significantly improves diagnostic accuracy and clinical reasoning recall by an average relative gain of 29% and 41%, respectively. The open-source dataset, code, and models are available at https://github.com/kevinwu23/Stanford-MedCaseReasoning.
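The reasoning statement recall metric reported above (the fraction of clinician-authored reasoning statements mentioned in a model's reasoning trace) can be sketched as follows. This is a simplified illustration, not the paper's implementation: the token-overlap matcher and the `threshold` parameter are assumptions standing in for whatever statement-matching procedure the authors actually use.

```python
import string

# Illustrative stop-word list; any real implementation would use a fuller one.
STOP = {"the", "a", "an", "of", "and", "or", "in", "to", "with", "is"}

def _content_words(text: str) -> set[str]:
    # Lowercase, strip surrounding punctuation, and drop stop words.
    return {w.strip(string.punctuation) for w in text.lower().split()} - STOP - {""}

def statement_mentioned(statement: str, trace: str, threshold: float = 0.6) -> bool:
    """Crude proxy: a clinician statement counts as 'mentioned' if enough
    of its content words appear in the model's reasoning trace."""
    words = _content_words(statement)
    return bool(words) and len(words & _content_words(trace)) / len(words) >= threshold

def reasoning_recall(clinician_statements: list[str], trace: str) -> float:
    """Fraction of clinician-authored reasoning statements the model mentions."""
    if not clinician_statements:
        return 0.0
    return sum(statement_mentioned(s, trace) for s in clinician_statements) / len(clinician_statements)

statements = [
    "fever and productive cough suggest pneumonia",
    "chest x-ray shows right lower lobe consolidation",
]
trace = "The fever and productive cough suggest pneumonia; x-ray pending."
print(reasoning_recall(statements, trace))  # 0.5: only the first statement is covered
```

A recall of 0.64, as reported for DeepSeek-R1, would mean roughly two of every three clinician reasoning statements appear in the model's trace.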
Problem

Research questions and friction points this paper is trying to address.

Assessing LLMs' diagnostic reasoning quality in medicine
Lack of benchmarks for clinical reasoning faithfulness
Improving LLM diagnostic accuracy via fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MedCaseReasoning dataset for clinical reasoning evaluation
Evaluates LLMs on diagnostic accuracy and reasoning alignment
Fine-tuning LLMs improves diagnostic accuracy and reasoning recall