🤖 AI Summary
Existing robustness evaluations for medical visual question answering (VQA) rely heavily on synthetic distribution shifts, coarse token-matching answer metrics, and missing interpretable sanity baselines, so they fail to reflect model reliability under real-world conditions. To address this, the authors propose SURE-VQA, a systematic framework for evaluating the robustness of vision-language models on medical VQA. Its core contributions are: (1) measuring robustness on realistic distribution shifts that are inherent to the VQA data rather than synthetic corruptions; (2) an LLM-based semantic answer-matching metric that captures meaning beyond token overlap; and (3) sanity baselines that use no image data, disentangling the visual contribution from language priors. The authors evaluate various fine-tuning methods, including parameter-efficient fine-tuning (PEFT), across three medical VQA datasets under four types of distribution shift. Results reveal: (i) image-free sanity baselines can perform surprisingly well; (ii) LoRA is the best-performing PEFT method; and (iii) no PEFT method consistently outperforms the others in robustness to shifts.
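The LLM-based semantic matching idea (contribution 2) can be sketched as a judge prompt plus a reply parser. This is a minimal illustration, not the paper's actual prompt or protocol; `llm` stands for any text-in/text-out callable (e.g. a wrapper around an API client), and all names here are hypothetical.

```python
def build_judge_prompt(question: str, reference: str, prediction: str) -> str:
    """Assemble a grading prompt for an LLM judge (illustrative wording)."""
    return (
        "You are grading a medical VQA answer.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference}\n"
        f"Model answer: {prediction}\n"
        "Reply with exactly 'match' if the model answer is semantically "
        "equivalent to the reference, otherwise 'no match'."
    )

def parse_judge_reply(reply: str) -> bool:
    """Interpret the judge's free-text reply as a binary match decision."""
    return reply.strip().lower().startswith("match")

def semantic_match(question: str, reference: str, prediction: str, llm) -> bool:
    """Score one prediction; llm is any callable str -> str."""
    return parse_judge_reply(llm(build_judge_prompt(question, reference, prediction)))

# Example with a stub judge standing in for a real LLM call:
stub = lambda prompt: "match"
semantic_match("Is a fracture visible?", "no", "No fracture is present.", stub)
```

Unlike token matching, this lets "no" and "No fracture is present." count as the same answer; the cost is that the metric inherits the judge LLM's errors.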
📝 Abstract
Vision-Language Models (VLMs) have great potential in medical tasks, such as Visual Question Answering (VQA), where they could act as interactive assistants for both patients and clinicians. Yet their robustness to distribution shifts on unseen data remains a critical concern for safe deployment. Evaluating such robustness requires a controlled experimental setup that allows for systematic insights into the model's behavior. However, we demonstrate that current setups fail to offer sufficiently thorough evaluations, limiting their ability to accurately assess model robustness. To address this gap, our work introduces a novel framework, called SURE-VQA, centered around three key requirements to overcome the current pitfalls and systematically analyze the robustness of VLMs: 1) Since robustness on synthetic shifts does not necessarily translate to real-world shifts, robustness should be measured on real-world shifts that are inherent to the VQA data; 2) Traditional token-matching metrics often fail to capture underlying semantics, necessitating the use of large language models (LLMs) for more accurate semantic evaluation; 3) Model performance often lacks interpretability due to missing sanity baselines, thus meaningful baselines should be reported that allow assessing the multimodal impact on the VLM. To demonstrate the relevance of this framework, we conduct a study on the robustness of various fine-tuning methods across three medical datasets with four different types of distribution shifts. Our study reveals several important findings: 1) Sanity baselines that do not utilize image data can perform surprisingly well; 2) We confirm LoRA as the best-performing PEFT method; 3) No PEFT method consistently outperforms others in terms of robustness to shifts. Code is provided at https://github.com/IML-DKFZ/sure-vqa.
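Finding 1, that sanity baselines without image data can perform surprisingly well, can be made concrete with the simplest such baseline: answer every question with the most frequent training answer for that question text, ignoring the image entirely. This is an illustrative sketch under our own assumptions, not the paper's exact baseline definition.

```python
from collections import Counter

def fit_majority_baseline(train_qa):
    """train_qa: list of (question, answer) pairs; images are never consulted."""
    per_question = {}
    for q, a in train_qa:
        per_question.setdefault(q, Counter())[a] += 1
    # Global fallback for questions never seen during training.
    fallback = Counter(a for _, a in train_qa).most_common(1)[0][0]
    return per_question, fallback

def predict_majority(model, question):
    """Return the majority training answer for this question, or the fallback."""
    per_question, fallback = model
    counts = per_question.get(question)
    return counts.most_common(1)[0][0] if counts else fallback
```

If such an image-blind predictor scores close to a fine-tuned VLM, the VLM is likely leaning on language priors rather than the image, which is exactly what a sanity baseline is meant to expose.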