ER-REASON: A Benchmark Dataset for LLM-Based Clinical Reasoning in the Emergency Room

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing medical QA benchmarks rely heavily on standardized licensing-exam questions, failing to capture the full clinical decision-making workflow (triage, diagnosis, treatment, and outcome prediction) and incurring high costs for expert annotation. Method: To address the urgent, interdisciplinary, and highly uncertain reasoning demands of emergency medicine, we introduce the first LLM evaluation benchmark grounded in real-world emergency department (ED) practice. It is built on 3,984 de-identified longitudinal patient records (25,174 documents) and systematically models end-to-end ED clinical reasoning tasks. We incorporate 72 physician-authored pedagogical reasoning chains and rule-out differential diagnosis structures, filling a critical gap in high-quality clinical rationale data. A multi-source medical-record alignment and structured reasoning task framework enables zero-shot and few-shot evaluation. Results: Experiments reveal that current state-of-the-art LLMs significantly underperform clinicians in dynamic evidence integration and uncertainty management.

📝 Abstract
Large language models (LLMs) have been extensively evaluated on medical question answering tasks based on licensing exams. However, real-world evaluations often depend on costly human annotators, and existing benchmarks tend to focus on isolated tasks that rarely capture the clinical reasoning or full workflow underlying medical decisions. In this paper, we introduce ER-Reason, a benchmark designed to evaluate LLM-based clinical reasoning and decision-making in the emergency room (ER), a high-stakes setting where clinicians make rapid, consequential decisions across diverse patient presentations and medical specialties under time pressure. ER-Reason includes data from 3,984 patients, encompassing 25,174 de-identified longitudinal clinical notes spanning discharge summaries, progress notes, history and physical exams, consults, echocardiography reports, imaging notes, and ER provider documentation. The benchmark includes evaluation tasks that span key stages of the ER workflow: triage intake, initial assessment, treatment selection, disposition planning, and final diagnosis, each structured to reflect core clinical reasoning processes such as differential diagnosis via rule-out reasoning. We also collected 72 full physician-authored rationales explaining reasoning processes that mimic the teaching process used in residency training, and are typically absent from ER documentation. Evaluations of state-of-the-art LLMs on ER-Reason reveal a gap between LLM-generated and clinician-authored clinical reasoning for ER decisions, highlighting the need for future research to bridge this divide.
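
To make the five-stage task layout described in the abstract concrete, here is a minimal, hypothetical Python sketch of how one ER-Reason-style task instance might be represented and turned into a zero-shot prompt. The class and field names (ERTaskInstance, stage, notes, question, reference_answer) and the prompt wording are illustrative assumptions, not the benchmark's actual schema.

```python
# Hypothetical sketch only: field names and prompt format are assumptions,
# not ER-Reason's actual data schema.
from dataclasses import dataclass
from typing import List

# The five ER workflow stages named in the abstract.
STAGES = ["triage", "initial_assessment", "treatment_selection",
          "disposition_planning", "final_diagnosis"]

@dataclass
class ERTaskInstance:
    patient_id: str
    stage: str             # one of STAGES
    notes: List[str]       # de-identified longitudinal notes for this patient
    question: str          # stage-specific question posed to the model
    reference_answer: str  # clinician-derived gold answer for scoring

def build_zero_shot_prompt(task: ERTaskInstance) -> str:
    """Assemble the patient context and the stage question into one prompt."""
    context = "\n\n".join(task.notes)
    return (
        "You are an emergency physician.\n"
        f"Patient record:\n{context}\n\n"
        f"Task ({task.stage}): {task.question}\nAnswer:"
    )
```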
Problem

Research questions and friction points this paper is trying to address.

Evaluates LLM clinical reasoning in emergency room settings
Addresses the lack of benchmarks grounded in real-world clinical workflows
Assesses LLM decision-making under time pressure and across diverse presentations
Innovation

Methods, ideas, or system contributions that make the work stand out.

ER-Reason benchmark for end-to-end clinical reasoning evaluation
Includes 25,174 longitudinal clinical notes from 3,984 patients
72 physician-authored rationales that mirror residency-style teaching (see the sketch below)
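
To illustrate the rule-out structure behind these rationales, the following is a minimal, hypothetical sketch of how a physician-style differential could be represented: candidate diagnoses are eliminated one by one with cited evidence. RuleOutStep, DifferentialRationale, and the example findings are assumptions for illustration, not the dataset's format.

```python
# Hypothetical sketch only: class names and example findings are assumptions,
# not the dataset's actual rationale format.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RuleOutStep:
    diagnosis: str   # candidate diagnosis under consideration
    evidence: str    # finding that supports or eliminates the candidate
    ruled_out: bool  # True if this candidate is eliminated at this step

@dataclass
class DifferentialRationale:
    steps: List[RuleOutStep]

    def final_diagnosis(self) -> Optional[str]:
        """Return the first candidate that survives rule-out, if any."""
        remaining = [s.diagnosis for s in self.steps if not s.ruled_out]
        return remaining[0] if remaining else None

# Example usage with made-up clinical content:
rationale = DifferentialRationale(steps=[
    RuleOutStep("pulmonary embolism", "negative D-dimer", ruled_out=True),
    RuleOutStep("acute coronary syndrome", "ST elevation on ECG", ruled_out=False),
])
print(rationale.final_diagnosis())  # -> "acute coronary syndrome"
```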
Authors

Nikita Mehandru
Ph.D. Student at University of California, Berkeley
Machine Learning in Medicine, Clinical NLP, Statistics
Niloufar Golchini
University of California, Berkeley
David Bamman
UC Berkeley
Natural Language Processing, Machine Learning, Digital Humanities, Computational Social Science
Travis Zack
University of California, San Francisco
Melanie F. Molina
University of California, San Francisco
Ahmed Alaa
University of California, Berkeley