MEDSYN: Benchmarking Multi-EviDence SYNthesis in Complex Clinical Cases for Multimodal Large Language Models

📅 2026-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks for medical multimodal large language models fail to capture the complexity of integrating heterogeneous, multi-source clinical evidence encountered in real-world practice, and are especially weak at evaluating final diagnostic decision-making. To address this gap, this work introduces MEDSYN, a multilingual, multimodal benchmark that integrates up to seven types of visual and textual clinical evidence per case and mirrors authentic clinical workflows, systematically evaluating models on both differential diagnosis generation and final diagnosis selection. The study reveals, for the first time, a pronounced performance gap between these two tasks; proposes an “evidence sensitivity” metric to quantify cross-modal evidence utilization; and shows that a smaller cross-modal utilization gap correlates with higher diagnostic accuracy. Experiments show that while state-of-the-art models match human experts at generating differentials, they fall short on final diagnosis, and that interventions guided by evidence sensitivity measurably improve diagnostic performance.

📝 Abstract
Multimodal large language models (MLLMs) have shown great potential in medical applications, yet existing benchmarks inadequately capture real-world clinical complexity. We introduce MEDSYN, a multilingual, multimodal benchmark of highly complex clinical cases with up to 7 distinct visual clinical evidence (CE) types per case. Mirroring clinical workflow, we evaluate 18 MLLMs on differential diagnosis (DDx) generation and final diagnosis (FDx) selection. While top models often match or even outperform human experts on DDx generation, all MLLMs exhibit a much larger DDx–FDx performance gap compared to expert clinicians, indicating a failure mode in synthesis of heterogeneous CE types. Ablations attribute this failure to (i) overreliance on less discriminative textual CE (e.g., medical history) and (ii) a cross-modal CE utilization gap. We introduce Evidence Sensitivity to quantify the latter and show that a smaller gap correlates with higher diagnostic accuracy. Finally, we demonstrate how it can be used to guide interventions to improve model performance. We will open-source our benchmark and code.
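The abstract introduces Evidence Sensitivity only at a high level. The sketch below shows one plausible ablation-style reading: sensitivity to a modality is the accuracy drop when that modality is withheld, and the cross-modal gap is the difference between the visual and textual sensitivities. The `evaluate` callback, the modality labels, and the gap definition are all assumptions for illustration, not the paper's published metric.

```python
# Hypothetical sketch of an ablation-style "evidence sensitivity" score.
# Assumed (not the paper's exact formula): sensitivity to a modality is
# the accuracy drop when that modality's clinical evidence is removed;
# the cross-modal gap is |visual sensitivity - textual sensitivity|.
from typing import Callable

def evidence_sensitivity(
    evaluate: Callable[[str], float],  # maps an evidence config to accuracy
    modality: str,
) -> float:
    """Accuracy drop when one evidence modality is ablated (assumed definition)."""
    full_acc = evaluate("all")                 # all CE types presented
    ablated_acc = evaluate(f"no_{modality}")   # same cases minus one modality
    return full_acc - ablated_acc

def cross_modal_gap(evaluate: Callable[[str], float]) -> float:
    """Absolute difference in sensitivity between visual and textual CE."""
    return abs(
        evidence_sensitivity(evaluate, "visual")
        - evidence_sensitivity(evaluate, "textual")
    )
```

Under this reading, a model whose accuracy degrades comparably when either modality is withheld has a small gap, consistent with the abstract's claim that a smaller cross-modal gap correlates with higher diagnostic accuracy.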
Problem

Research questions and friction points this paper is trying to address.

multimodal large language models
clinical evidence synthesis
diagnostic reasoning
medical benchmarking
heterogeneous evidence integration
Innovation

Methods, ideas, or system contributions that make the work stand out.

multimodal large language models
clinical evidence synthesis
evidence sensitivity
diagnostic reasoning
medical benchmark