🤖 AI Summary
Implicit confounders between images and questions in Medical Visual Question Answering (MedVQA) induce spurious cross-modal associations, undermining causal interpretability and clinical plausibility of answers.
Method: We propose the first Structural Causal Model (SCM) tailored for MedVQA, integrating a novel causal graph that explicitly models vision–language interactions. To mitigate relative confounding effects, we introduce a multivariate resampling-based front-door adjustment method. Additionally, we design a mutual-information-driven spurious correlation detection mechanism and a multimodal medical semantic prompting strategy to jointly optimize the large language model and visual encoder.
Contribution/Results: Our approach achieves significant accuracy improvements across three major MedVQA benchmarks. Crucially, it provides the first empirical validation that model outputs align with ground-truth medical causal relationships, establishing a new paradigm for interpretable, causally grounded medical AI.
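The summary does not spell out the paper's multivariate resampling variant; as background, the classical front-door adjustment it builds on identifies the interventional distribution $P(y \mid do(x))$ through a mediator $M$ (here, presumably, the extracted cross-modal features):

```latex
P(y \mid do(x)) \;=\; \sum_{m} P(m \mid x) \sum_{x'} P(y \mid m, x')\, P(x')
```

The outer sum marginalizes over mediator states, while the inner sum over $x'$ blocks the back-door path from the unobserved confounder, so the effect of the image–question input on the answer is estimated without observing the confounder itself.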
📝 Abstract
Medical Visual Question Answering (MedVQA) aims to answer medical questions based on medical images. However, the complexity of medical data gives rise to confounders that are difficult to observe, so bias between images and questions is inevitable. Such cross-modal bias makes it challenging to infer medically meaningful answers. In this work, we propose a causal inference framework for the MedVQA task that effectively eliminates the relative confounding effect between the image and the question, ensuring accurate question answering (QA). We are the first to introduce a novel causal graph structure that represents the interaction between visual and textual elements, explicitly capturing how different questions influence visual features. During optimization, we apply mutual information to discover spurious correlations and propose a multivariate resampling-based front-door adjustment method to eliminate the relative confounding effect, aligning features according to their true causal relevance to the QA task. In addition, we introduce a prompting strategy that combines multiple prompt forms to improve the model's ability to understand complex medical data and answer accurately. Extensive experiments on three MedVQA datasets demonstrate that 1) our method significantly improves MedVQA accuracy, and 2) our method captures true causal correlations even in the face of complex medical data.
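The abstract uses mutual information to flag spurious cross-modal correlations but does not give the estimator. A minimal sketch of one common choice, a histogram-based MI estimate between two feature streams, is below; the function name and binning scheme are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Histogram-based estimate of MI between two 1-D samples, in nats.

    High MI between a visual feature and a question feature that should be
    causally unrelated is one signal of a spurious correlation.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution P(x, y)
    px = pxy.sum(axis=1, keepdims=True)       # marginal P(x)
    py = pxy.sum(axis=0, keepdims=True)       # marginal P(y)
    nz = pxy > 0                              # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
mi_dep = mutual_information(x, x + 0.1 * rng.normal(size=5000))  # strongly dependent
mi_indep = mutual_information(x, rng.normal(size=5000))          # independent
```

Dependent pairs score far higher than independent ones, so thresholding such an estimate gives a simple spurious-correlation detector; the paper's mechanism presumably operates on learned feature representations rather than raw scalars.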