🤖 AI Summary
This work exposes critical reliability deficiencies in multimodal large language models (MLLMs) for medical visual question answering (VQA). To address a key limitation of existing medical benchmarks, namely their failure to reveal safety-critical vulnerabilities, the authors introduce MediConfusion, a medical VQA benchmark explicitly designed to probe the failure modes of medical MLLMs from a vision perspective. The benchmark is built around image pairs that are visually dissimilar and clearly distinct to medical experts, yet systematically confuse state-of-the-art models. Experiments demonstrate that all evaluated models, open-source and proprietary alike, perform below random guessing on this benchmark, challenging prevailing evaluation paradigms and raising serious concerns about the reliability of existing medical MLLMs. Through combined qualitative and quantitative analysis, the study extracts common patterns of model failure, providing diagnostic insights that may inform the design of a new generation of more trustworthy and reliable medical AI systems.
📝 Abstract
Multimodal Large Language Models (MLLMs) have tremendous potential to improve the accuracy, availability, and cost-effectiveness of healthcare by providing automated solutions or serving as aids to medical professionals. Despite promising first steps in developing medical MLLMs in the past few years, their capabilities and limitations are not well understood. Recently, many benchmark datasets have been proposed that test the general medical knowledge of such models across a variety of medical areas. However, the systematic failure modes and vulnerabilities of such models are severely underexplored, with most medical benchmarks failing to expose the shortcomings of existing models in this safety-critical domain. In this paper, we introduce MediConfusion, a challenging medical Visual Question Answering (VQA) benchmark dataset that probes the failure modes of medical MLLMs from a vision perspective. We reveal that state-of-the-art models are easily confused by image pairs that are otherwise visually dissimilar and clearly distinct to medical experts. Strikingly, all available models (open-source or proprietary) achieve performance below random guessing on MediConfusion, raising serious concerns about the reliability of existing medical MLLMs for healthcare deployment. We also extract common patterns of model failure that may help the design of a new generation of more trustworthy and reliable MLLMs in healthcare.