Leveraging Causal Reasoning Method for Explaining Medical Image Segmentation Models

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited interpretability of current medical image segmentation models, which hinders trust in high-stakes clinical settings. While existing explanation methods are primarily designed for classification tasks and generalize poorly to segmentation, this study pioneers the integration of causal inference into the interpretability analysis of medical image segmentation. By backpropagating the average treatment effect (ATE), the proposed approach quantifies the causal influence of input image regions and internal network components on the segmentation output. Evaluated on two representative medical imaging datasets, the method outperforms existing explanation techniques, yielding more faithful and fine-grained interpretations. Furthermore, it uncovers heterogeneity in perceptual strategies—both across different models and within the same model under varying inputs—offering novel insights for model diagnosis and refinement.

📝 Abstract
Medical image segmentation plays a vital role in clinical decision-making, enabling precise localization of lesions and guiding interventions. Despite significant advances in segmentation accuracy, the black-box nature of most deep models has raised growing concerns about their trustworthiness in high-stakes medical scenarios. Current explanation techniques have focused primarily on classification tasks, leaving the segmentation domain relatively underexplored. We introduce an explanation model for the segmentation task that employs the causal inference framework and backpropagates the average treatment effect (ATE) into a quantification metric to determine the influence of input regions, as well as network components, on target segmentation areas. Through comparisons with recent segmentation explainability techniques on two representative medical imaging datasets, we demonstrate that our approach provides more faithful explanations than existing methods. Furthermore, we carry out a systematic causal analysis of multiple foundational segmentation models using our method, revealing significant heterogeneity in perceptual strategies across different models, and even between different inputs to the same model, suggesting the potential of our method to provide notable insights for optimizing segmentation models. Our code can be found at https://github.com/lcmmai/PdCR.
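The ATE-style attribution the abstract describes can be illustrated with a minimal, model-agnostic sketch. Note this is not the authors' PdCR implementation (which backpropagates the ATE through the network): here the effect is simply estimated by direct intervention, treating occlusion of an input patch as the treatment and the change in predicted probability over a target region as the outcome. The names `patch_ate_map` and `toy_segment` are illustrative, not from the paper.

```python
import numpy as np

def patch_ate_map(image, segment, target_mask, patch=4, baseline=0.0):
    """Estimate a per-patch treatment-effect attribution map.

    Treatment: replacing one patch of the input with a baseline value.
    Effect: change in mean predicted probability inside target_mask.
    """
    h, w = image.shape
    base_score = (segment(image) * target_mask).sum() / target_mask.sum()
    ate = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            x = image.copy()
            x[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = baseline
            score = (segment(x) * target_mask).sum() / target_mask.sum()
            # Positive effect: occluding this patch lowers the target-region
            # probability, i.e. the patch causally supports the segmentation.
            ate[i, j] = base_score - score
    return ate

# Toy "segmenter": predicted probability proportional to pixel intensity.
def toy_segment(img):
    return np.clip(img, 0.0, 1.0)

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0      # bright square "lesion"
target = np.zeros((8, 8)); target[2:6, 2:6] = 1.0  # region to explain
effects = patch_ate_map(img, toy_segment, target, patch=4)
```

With a real model, `segment` would be a forward pass returning per-pixel probabilities; each of the four 4×4 patches here overlaps one quarter of the bright square, so all receive an equal positive effect.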
Problem

Research questions and friction points this paper is trying to address.

medical image segmentation
model interpretability
black-box models
explainability
causal reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal inference
medical image segmentation
explainable AI
average treatment effect
model interpretability
Limai Jiang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Ruitao Xie
Shenzhen University
mobile computing, cloud computing, wireless sensor networks
Bokai Yang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Shenzhen Polytechnic University
Huazhen Huang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences
Juan He
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; University of Macau
Yufu Huo
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Zikai Wang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Yang Wei
Chongqing University of Posts and Telecommunications
adversarial attack, image forgery detection, image processing
Yunpeng Cai
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences