MediConfusion: Can you trust your AI radiologist? Probing the reliability of multimodal medical foundation models

📅 2024-09-23
🏛️ International Conference on Learning Representations
📈 Citations: 3
Influential: 0
🤖 AI Summary
This work exposes critical reliability deficiencies in multimodal large language models (MLLMs) for medical visual question answering (VQA). Because existing medical benchmarks largely fail to reveal safety-critical vulnerabilities, the authors introduce MediConfusion, a medical VQA benchmark explicitly designed to probe failure modes from the vision side. The benchmark is built around pairs of images that are visually dissimilar and clearly distinct to medical experts, yet systematically confuse current models. Experiments show that all evaluated open-source and proprietary medical MLLMs perform below the level of random guessing on this benchmark, calling prevailing evaluation practice into question. Through combined qualitative and quantitative analysis, the study identifies common patterns of model failure, providing diagnostic insights toward a new generation of more trustworthy and reliable medical MLLMs.

📝 Abstract
Multimodal Large Language Models (MLLMs) have tremendous potential to improve the accuracy, availability, and cost-effectiveness of healthcare by providing automated solutions or serving as aids to medical professionals. Despite promising first steps in developing medical MLLMs in the past few years, their capabilities and limitations are not well understood. Recently, many benchmark datasets have been proposed that test the general medical knowledge of such models across a variety of medical areas. However, the systematic failure modes and vulnerabilities of such models are severely underexplored, with most medical benchmarks failing to expose the shortcomings of existing models in this safety-critical domain. In this paper, we introduce MediConfusion, a challenging medical Visual Question Answering (VQA) benchmark dataset that probes the failure modes of medical MLLMs from a vision perspective. We reveal that state-of-the-art models are easily confused by image pairs that are otherwise visually dissimilar and clearly distinct for medical experts. Strikingly, all available models (open-source or proprietary) achieve performance below random guessing on MediConfusion, raising serious concerns about the reliability of existing medical MLLMs for healthcare deployment. We also extract common patterns of model failure that may help the design of a new generation of more trustworthy and reliable MLLMs in healthcare.
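To make the "below random guessing" finding concrete, the sketch below shows one way a paired, confusion-style VQA evaluation could be scored. The `ConfusionPair` layout, field names, and `pair_accuracy` function are illustrative assumptions, not the benchmark's released data format or official evaluation code; the point is only to show why a model that collapses to the same answer for both images in a pair can score below a random-guessing baseline.

```python
# Minimal sketch of a paired "confusion"-style VQA evaluation, in the spirit of
# MediConfusion. Data layout and scoring here are illustrative assumptions.

import random
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ConfusionPair:
    image_a: str          # path/ID of the first image in the pair
    image_b: str          # path/ID of the second, easily confused image
    question: str         # the same question is asked about both images
    options: List[str]    # two answer options shared by both images
    answer_a: str         # correct option for image_a
    answer_b: str         # correct option for image_b (differs from answer_a)


def pair_accuracy(model: Callable[[str, str, List[str]], str],
                  pairs: List[ConfusionPair]) -> float:
    """A pair counts as correct only if BOTH images are answered correctly.

    Under this scoring, guessing uniformly between two options yields ~25%,
    while a model that always picks the same option for both images scores 0%,
    which is one way performance can fall below a random-guessing baseline.
    """
    correct = 0
    for p in pairs:
        pred_a = model(p.image_a, p.question, p.options)
        pred_b = model(p.image_b, p.question, p.options)
        if pred_a == p.answer_a and pred_b == p.answer_b:
            correct += 1
    return correct / len(pairs) if pairs else 0.0


if __name__ == "__main__":
    # Random-guessing baseline on a toy, repeated example pair (hypothetical data).
    rng = random.Random(0)
    random_model = lambda image, question, options: rng.choice(options)
    demo = [ConfusionPair("imgA.png", "imgB.png",
                          "Which finding is present?",
                          ["pleural effusion", "pneumothorax"],
                          "pleural effusion", "pneumothorax")] * 100
    print(f"Random baseline pair accuracy: {pair_accuracy(random_model, demo):.2f}")
```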
Problem

Research questions and friction points this paper is trying to address.

Assessing reliability of medical multimodal AI models
Identifying failure modes in medical visual question answering
Evaluating trustworthiness of AI radiologists in healthcare
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MediConfusion benchmark for medical VQA
Probes failure modes of medical MLLMs from a vision perspective
Reveals models perform below random guessing
Mohammad Shahab Sepehri
PhD student, University of Southern California
Reliability · Efficiency · VLM
Zalan Fabian
Dept. of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA
Maryam Soltanolkotabi
Dept. of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT
M. Soltanolkotabi
Dept. of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA