ECG-Reasoning-Benchmark: A Benchmark for Evaluating Clinical Reasoning Capabilities in ECG Interpretation

📅 2026-03-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study addresses the open question of whether multimodal large language models (MLLMs) genuinely perform stepwise clinical reasoning in electrocardiogram (ECG) interpretation or merely rely on superficial visual cues. To investigate this, the authors introduce the first multi-turn evaluation benchmark designed specifically for ECG clinical reasoning, covering 17 core diagnostic tasks and over 6,400 samples. Through multi-turn dialog-based reasoning tests, alignment with clinical standards, and visual evidence grounding analysis, the work systematically assesses the completeness of models' reasoning chains and their ability to link conclusions to actual ECG evidence. Results show that leading MLLMs achieve a mere 6% success rate in constructing complete reasoning chains: while they can retrieve relevant medical knowledge, they consistently fail to anchor diagnostic conclusions to the underlying visual features of the ECG signal, exposing a fundamental limitation in their clinical reasoning.

📝 Abstract
While Multimodal Large Language Models (MLLMs) show promising performance in automated electrocardiogram interpretation, it remains unclear whether they genuinely perform step-by-step reasoning or merely rely on superficial visual cues. To investigate this, we introduce ECG-Reasoning-Benchmark, a novel multi-turn evaluation framework comprising over 6,400 samples to systematically assess step-by-step reasoning across 17 core ECG diagnoses. Our comprehensive evaluation of state-of-the-art models reveals a critical failure in executing multi-step logical deduction. Although models possess the medical knowledge to retrieve clinical criteria for a diagnosis, they exhibit near-zero success rates (6% Completion) in maintaining a complete reasoning chain, primarily failing to ground the corresponding ECG findings in the actual visual evidence of the ECG signal. These results demonstrate that current MLLMs bypass actual visual interpretation, exposing a critical flaw in existing training paradigms and underscoring the necessity for robust, reasoning-centric medical AI. The code and data are available at https://github.com/Jwoo5/ecg-reasoning-benchmark.
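The abstract's chain-level "Completion" metric can be pictured as all-or-nothing scoring: a sample only counts if every step of the reasoning chain is both clinically correct and grounded in the ECG's visual evidence. The sketch below is a hypothetical illustration of that idea, not the benchmark's actual code; the step names and `correct`/`grounded` fields are invented for the example.

```python
# Hypothetical sketch of an all-or-nothing chain "Completion" metric.
# Field names and step labels are illustrative, not the paper's schema.

def chain_complete(steps):
    """A chain is complete iff every step is correct AND visually grounded."""
    return all(s["correct"] and s["grounded"] for s in steps)

def completion_rate(samples):
    """Fraction of samples whose full reasoning chain holds end to end."""
    if not samples:
        return 0.0
    return sum(chain_complete(s) for s in samples) / len(samples)

# Toy case mirroring the paper's finding: knowledge retrieval succeeds,
# but visual grounding fails, so the chain as a whole is not complete.
sample = [
    {"step": "retrieve diagnostic criteria", "correct": True, "grounded": True},
    {"step": "identify ST elevation",        "correct": True, "grounded": False},
    {"step": "conclude diagnosis",           "correct": True, "grounded": False},
]
print(completion_rate([sample]))  # 0.0
```

Under this strict conjunction, a single ungrounded step zeroes out the whole sample, which is consistent with models scoring only 6% despite retrieving the right criteria.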
Problem

Research questions and friction points this paper is trying to address.

ECG interpretation
clinical reasoning
multimodal large language models
reasoning benchmark
visual grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

ECG reasoning
multimodal large language models
clinical reasoning benchmark
step-by-step deduction
visual grounding
Jungwoo Oh
KAIST
Machine Learning · Healthcare · ECG
Hyunseung Chung
KAIST
Junhee Lee
Electronics and Telecommunications Research Institute
Modeling & Simulation · Edge Computing · Infrastructure as Code (IaC)
Min-Gyu Kim
Ajou University School of Medicine
Hangyul Yoon
KAIST
Ki Seong Lee
KAIST
Youngchae Lee
Yonsei University College of Medicine
Muhan Yeo
Seoul National University Bundang Hospital
Edward Choi
KAIST
Machine Learning · Artificial Intelligence · Healthcare