Explainable Artificial Intelligence in Biomedical Image Analysis: A Comprehensive Survey

📅 2025-07-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing biomedical XAI surveys lack modality-specific perspectives, overlook advances in multimodal learning and vision-language models (VLMs), and provide insufficient practical guidance. To address this, we propose the first imaging-modality-centric taxonomy for XAI in biomedicine, systematically unifying attention mechanisms, gradient-based attribution methods, and counterfactual explanations. Crucially, we integrate multimodal fusion strategies and VLMs—such as CLIP—into the interpretability framework for the first time. Through a systematic literature review, we catalog evaluation metrics and open-source tools, identify cross-modal interpretability disparities, and pinpoint clinical deployment bottlenecks. This work fills a critical gap in modality-aware XAI surveys, delivering a structured knowledge graph and actionable guidelines to advance transparency and trustworthiness in deep learning for medical imaging.

📝 Abstract
Explainable artificial intelligence (XAI) has become increasingly important in biomedical image analysis to promote transparency, trust, and clinical adoption of deep learning (DL) models. While several surveys have reviewed XAI techniques, they often lack a modality-aware perspective, overlook recent advances in multimodal and vision-language paradigms, and provide limited practical guidance. This survey addresses this gap through a comprehensive and structured synthesis of XAI methods tailored to biomedical image analysis. We systematically categorize XAI methods, analyzing their underlying principles, strengths, and limitations within biomedical contexts. A modality-centered taxonomy is proposed to align XAI methods with specific imaging types, highlighting the distinct interpretability challenges across modalities. We further examine the emerging role of multimodal learning and vision-language models in explainable biomedical AI, a topic largely underexplored in previous work. Our contributions also include a summary of widely used evaluation metrics and open-source frameworks, along with a critical discussion of persistent challenges and future directions. This survey offers a timely and in-depth foundation for advancing interpretable DL in biomedical image analysis.
Problem

Research questions and friction points this paper is trying to address.

Addressing lack of modality-aware XAI in biomedical image analysis
Exploring multimodal and vision-language paradigms for explainable AI
Providing practical guidance on XAI methods and evaluations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modality-centered taxonomy for XAI methods
Multimodal learning in explainable biomedical AI
Evaluation metrics and open-source frameworks
Getamesay Haile Dagnaw
Griffith University, Australia
Yanming Zhu
Harvard University
Neuroscience
Muhammad Hassan Maqsood
Griffith University, Australia
Wencheng Yang
University of Southern Queensland
Biometrics, Privacy-Preserving AI
Xingshuai Dong
Griffith University, Australia
Xuefei Yin
Griffith University, Australia
Alan Wee-Chung Liew
Professor, School of ICT, Griffith University
Machine learning, medical imaging, computer vision, ensemble learning, data stream learning