🤖 AI Summary
Existing approaches struggle to effectively evaluate the clinical reasoning generated by multimodal models on ECG signals, either relying on non-scalable manual review or employing proxy metrics that fail to capture logical and semantic correctness. This work proposes the first reproducible, automated evaluation framework that decouples ECG reasoning into two dimensions: perception—identifying signal patterns—and deduction—applying medical knowledge for logical inference. The perception component is assessed via agent-driven code generation that empirically verifies the signal findings described in the reasoning trace, while the deduction component is evaluated through retrieval-augmented alignment with a structured clinical knowledge base to ensure logical consistency. This dual-path mechanism enables fine-grained, scalable, and fully automated objective assessment, significantly outperforming conventional QA-based or human-centric methods in semantic fidelity, efficiency, and reproducibility.
📝 Abstract
While multimodal large language models offer a promising solution to the "black box" nature of health AI by generating interpretable reasoning traces, verifying the validity of these traces remains a critical challenge. Existing evaluation methods are either unscalable, relying on manual clinician review, or superficial, relying on proxy metrics (e.g., QA) that fail to capture the semantic correctness of clinical logic. In this work, we introduce a reproducible framework for evaluating reasoning over ECG signals. We propose decomposing reasoning into two distinct components: (i) Perception, the accurate identification of patterns within the raw signal, and (ii) Deduction, the logical application of domain knowledge to those patterns. To evaluate Perception, we employ an agentic framework that generates code to empirically verify the temporal structures described in the reasoning trace. To evaluate Deduction, we measure the alignment of the model's logic against a structured database of established clinical criteria using a retrieval-based approach. This dual-verification method enables the scalable assessment of "true" reasoning capabilities.
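The dual-path idea can be sketched in miniature: a perception claim (e.g., a stated heart rate) is re-measured directly from the signal by generated code, while a deduction claim (a diagnosis) is checked against a retrieved clinical criterion. This is a minimal illustrative sketch, not the paper's implementation; the toy criteria database, tolerance, and function names are all assumptions.

```python
# Hypothetical sketch of dual-path verification. All names, criteria,
# and thresholds here are illustrative, not the paper's actual system.

# --- Perception: empirically re-verify a signal claim from the trace ---
def verify_perception(r_peak_times_s, claimed_hr_bpm, tol_bpm=10.0):
    """Re-measure heart rate from R-R intervals and compare to the claim."""
    rr = [b - a for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]
    measured_hr = 60.0 / (sum(rr) / len(rr))
    return abs(measured_hr - claimed_hr_bpm) <= tol_bpm

# --- Deduction: align a diagnostic claim with a retrieved criterion ---
CRITERIA_DB = {  # toy stand-in for a structured clinical knowledge base
    "sinus tachycardia": lambda f: f.get("hr_bpm", 0) > 100,
    "sinus bradycardia": lambda f: f.get("hr_bpm", 999) < 60,
}

def verify_deduction(diagnosis, findings):
    """Retrieve the matching criterion and test logical consistency."""
    rule = CRITERIA_DB.get(diagnosis.lower())
    return rule is not None and rule(findings)

# Reasoning trace: "R-R intervals ~0.5 s, so HR ~120 bpm -> sinus tachycardia"
r_peaks = [0.0, 0.5, 1.0, 1.5, 2.0]  # R-peak times in seconds
print(verify_perception(r_peaks, claimed_hr_bpm=120))            # perception path
print(verify_deduction("sinus tachycardia", {"hr_bpm": 120}))    # deduction path
```

Separating the two checks means a trace can fail in interpretable ways: a correct diagnosis built on a misread signal fails perception, while a correctly read signal paired with faulty logic fails deduction.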