SURE-VQA: Systematic Understanding of Robustness Evaluation in Medical VQA Tasks

📅 2024-11-29
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing robustness evaluations for medical visual question answering (VQA) rely heavily on synthetic distribution shifts and coarse token-matching metrics, and they lack interpretable sanity baselines — so they fail to reflect model reliability under real-world distribution shifts. To address this, the authors propose SURE-VQA, a systematic robustness evaluation framework tailored to medical VQA. Its core contributions are: (1) evaluation on realistic distribution shifts inherent to the VQA data rather than synthetic corruptions; (2) an LLM-driven semantic answer-matching metric that captures meaning beyond token overlap; and (3) image-agnostic sanity baselines that disentangle the visual contribution from language priors. The authors systematically evaluate mainstream parameter-efficient fine-tuning (PEFT) methods across three medical VQA datasets under four types of distribution shifts. Results reveal: (i) sanity baselines that ignore the image can perform surprisingly well; (ii) LoRA is the best-performing PEFT method; and (iii) no PEFT method consistently outperforms the others in robustness to shifts.

📝 Abstract
Vision-Language Models (VLMs) have great potential in medical tasks, like Visual Question Answering (VQA), where they could act as interactive assistants for both patients and clinicians. Yet their robustness to distribution shifts on unseen data remains a critical concern for safe deployment. Evaluating such robustness requires a controlled experimental setup that allows for systematic insights into the model's behavior. However, we demonstrate that current setups fail to offer sufficiently thorough evaluations, limiting their ability to accurately assess model robustness. To address this gap, our work introduces a novel framework, called SURE-VQA, centered around three key requirements to overcome the current pitfalls and systematically analyze the robustness of VLMs: 1) Since robustness on synthetic shifts does not necessarily translate to real-world shifts, robustness should be measured on real-world shifts that are inherent to the VQA data; 2) Traditional token-matching metrics often fail to capture underlying semantics, necessitating the use of large language models (LLMs) for more accurate semantic evaluation; 3) Model performance often lacks interpretability due to missing sanity baselines, thus meaningful baselines should be reported that allow assessing the multimodal impact on the VLM. To demonstrate the relevance of this framework, we conduct a study on the robustness of various fine-tuning methods across three medical datasets with four different types of distribution shifts. Our study reveals several important findings: 1) Sanity baselines that do not utilize image data can perform surprisingly well; 2) We confirm LoRA as the best-performing PEFT method; 3) No PEFT method consistently outperforms others in terms of robustness to shifts. Code is provided at https://github.com/IML-DKFZ/sure-vqa.
Problem

Research questions and friction points this paper is trying to address.

Assessing VLM robustness to real-world distribution shifts in medical VQA
Improving semantic evaluation accuracy using LLMs in medical VQA
Enhancing interpretability with meaningful baselines in medical VQA robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Real-world distribution shifts for robustness evaluation
LLMs for accurate semantic evaluation
Sanity baselines for interpretable performance assessment
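The second innovation — replacing token matching with LLM-based semantic evaluation — can be illustrated with a minimal sketch. The function names, normalization choices, and prompt template below are illustrative assumptions, not the paper's actual implementation:

```python
# Sketch: why token matching fails for medical VQA answers, and how a
# semantic-equivalence judgment could instead be delegated to an LLM.

def token_match(prediction: str, reference: str) -> bool:
    """Traditional metric: exact match after basic normalization."""
    def norm(s: str) -> str:
        return " ".join(s.lower().strip().rstrip(".").split())
    return norm(prediction) == norm(reference)

def llm_judge_prompt(question: str, prediction: str, reference: str) -> str:
    """Build a prompt an LLM judge could answer with 'yes'/'no' (sketch)."""
    return (
        "Do the following two answers to the question mean the same thing?\n"
        f"Question: {question}\n"
        f"Answer A: {prediction}\n"
        f"Answer B: {reference}\n"
        "Reply with 'yes' or 'no'."
    )

# Token matching misses semantically equivalent phrasings:
print(token_match("heart enlargement", "cardiomegaly"))  # False
# ...while only trivial surface variation is handled:
print(token_match("Cardiomegaly.", "cardiomegaly"))      # True
```

An LLM judge scoring the prompt above would accept "heart enlargement" as equivalent to "cardiomegaly", which exact token matching cannot.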
Kim-Celine Kahl
German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Faculty of Mathematics and Computer Science, University of Heidelberg, Germany
Selen Erkan
German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
Jeremias Traub
German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Germany; Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
Carsten T. Lüth
PhD Student, Interactive Machine Learning Research Group
Klaus Maier-Hein
Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Germany; National Center for Tumor Diseases (NCT) Heidelberg, Germany
Lena Maier-Hein
Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany; German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems (IMSY), Germany; National Center for Tumor Diseases (NCT) Heidelberg, Germany
Paul F. Jaeger
Research Scientist at Google DeepMind