Structure Causal Models and LLMs Integration in Medical Visual Question Answering

📅 2025-04-29
🏛️ IEEE Transactions on Medical Imaging
📈 Citations: 0
Influential: 0
🤖 AI Summary
Implicit confounders between images and questions in Medical Visual Question Answering (MedVQA) induce spurious cross-modal associations, undermining the causal interpretability and clinical plausibility of answers. Method: We propose the first Structural Causal Model (SCM) tailored to MedVQA, built around a novel causal graph that explicitly models vision–language interactions. To mitigate relative confounding effects, we introduce a multivariate resampling-based front-door adjustment method. We additionally design a mutual-information-driven spurious-correlation detection mechanism and a multimodal medical semantic prompting strategy to jointly optimize the large language model and the visual encoder. Contribution/Results: Our approach achieves significant accuracy improvements across three major MedVQA benchmarks. Crucially, it provides the first empirical validation that model outputs align with ground-truth medical causal relationships, establishing a new paradigm for interpretable, causally grounded medical AI.

📝 Abstract
Medical Visual Question Answering (MedVQA) aims to answer medical questions based on medical images. However, the complexity of medical data leads to confounders that are difficult to observe, so bias between images and questions is inevitable. Such cross-modal bias makes it challenging to infer medically meaningful answers. In this work, we propose a causal inference framework for the MedVQA task, which effectively eliminates the relative confounding effect between the image and the question to ensure the precision of the question-answering (QA) session. We are the first to introduce a novel causal graph structure that represents the interaction between visual and textual elements, explicitly capturing how different questions influence visual features. During optimization, we apply mutual information to discover spurious correlations and propose a multi-variable resampling front-door adjustment method to eliminate the relative confounding effect, aligning features based on their true causal relevance to the question-answering task. In addition, we introduce a prompt strategy that combines multiple prompt forms to improve the model's ability to understand complex medical data and answer accurately. Extensive experiments on three MedVQA datasets demonstrate that 1) our method significantly improves the accuracy of MedVQA, and 2) our method captures true causal correlations in the face of complex medical data.
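The multi-variable resampling method described above builds on front-door adjustment. The paper's multivariate variant is not detailed in this listing, so as a reference point only, Pearl's standard front-door formula for an exposure \(V\) (here, image/question features), a mediator \(M\), and an answer \(A\) is:

\[
P(A \mid do(V=v)) \;=\; \sum_{m} P(M=m \mid V=v) \sum_{v'} P(A \mid M=m, V=v')\,P(V=v')
\]

The inner sum estimates the effect of the mediator on the answer averaged over the exposure's distribution, which removes the influence of unobserved confounders between \(V\) and \(A\) by routing the causal effect entirely through \(M\).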
Problem

Research questions and friction points this paper is trying to address.

Addressing cross-modal bias in Medical Visual Question Answering (MedVQA)
Eliminating confounding effects between images and questions in MedVQA
Improving accuracy and causal correlations in medical QA tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal inference framework eliminates confounding effects
Multi-variable resampling aligns features causally
Multi-prompt strategy enhances medical data understanding
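The mutual-information-driven spurious-correlation detection mechanism is not specified in this listing. As a minimal sketch of the underlying idea, a histogram plug-in estimator of mutual information can score cross-modal feature pairs, flagging strongly dependent pairs as candidate spurious correlations; all function names, the bin count, and the threshold below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram (plug-in) estimate of I(X; Y) in nats for 1-D samples.

    A rough estimator for illustration only; it is biased upward for
    small samples and sensitive to the bin count.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def flag_spurious(image_feats, text_feats, threshold=0.1):
    """Score every (image-feature, text-feature) pair by MI and flag
    pairs exceeding an (arbitrary, illustrative) dependence threshold."""
    flags = []
    for i in range(image_feats.shape[1]):
        for j in range(text_feats.shape[1]):
            mi = mutual_information(image_feats[:, i], text_feats[:, j])
            flags.append(((i, j), mi, mi > threshold))
    return flags
```

In this sketch, pairs flagged `True` would be candidates for the front-door adjustment step, while low-MI pairs are treated as causally irrelevant to each other.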
Zibo Xu
School of Microelectronics, Tianjin University, Tianjin 300072, China
Qiang Li
School of Microelectronics, Tianjin University, Tianjin 300072, China
Weizhi Nie
Tianjin University
Medical Image Processing · Computer Vision · LLMs
Weijie Wang
PhD Student, Zhejiang University
Computer Vision · Efficient AI · Deep Learning
Anan Liu
School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China