🤖 AI Summary
Existing medical QA benchmarks rely heavily on standardized licensing-exam questions, failing to capture the full clinical decision-making workflow (triage, diagnosis, treatment, and outcome prediction), while real-world evaluations incur high expert-annotation costs. Method: To address the urgent, interdisciplinary, and highly uncertain reasoning demands of emergency medicine, we introduce the first LLM evaluation benchmark grounded in real-world emergency department (ED) practice. It is built on 3,984 de-identified longitudinal patient records (25,174 documents) and systematically models end-to-end ED clinical reasoning tasks. We also incorporate 72 physician-authored pedagogical reasoning chains with rule-out-based differential-diagnosis structure, filling a critical gap in high-quality clinical-rationale data. A framework that aligns multi-source medical records with structured reasoning tasks enables both zero-shot and few-shot evaluation. Results: Experiments reveal that current state-of-the-art LLMs significantly underperform clinicians in dynamic evidence integration and uncertainty management.
📝 Abstract
Large language models (LLMs) have been extensively evaluated on medical question answering tasks based on licensing exams. However, real-world evaluations often depend on costly human annotators, and existing benchmarks tend to focus on isolated tasks that rarely capture the clinical reasoning or full workflow underlying medical decisions. In this paper, we introduce ER-Reason, a benchmark designed to evaluate LLM-based clinical reasoning and decision-making in the emergency room (ER)--a high-stakes setting where clinicians make rapid, consequential decisions across diverse patient presentations and medical specialties under time pressure. ER-Reason includes data from 3,984 patients, encompassing 25,174 de-identified longitudinal clinical notes spanning discharge summaries, progress notes, history and physical exams, consults, echocardiography reports, imaging notes, and ER provider documentation. The benchmark includes evaluation tasks that span key stages of the ER workflow: triage intake, initial assessment, treatment selection, disposition planning, and final diagnosis--each structured to reflect core clinical reasoning processes such as differential diagnosis via rule-out reasoning. We also collected 72 full physician-authored rationales that explain the reasoning process, mimic the teaching used in residency training, and are typically absent from ER documentation. Evaluations of state-of-the-art LLMs on ER-Reason reveal a gap between LLM-generated and clinician-authored clinical reasoning for ER decisions, highlighting the need for future research to bridge this divide.