🤖 AI Summary
This study addresses a key barrier to the clinical adoption of AI-based breast cancer diagnosis: insufficient model interpretability. We systematically review and empirically evaluate prominent XAI methods, including SHAP, LIME, and Grad-CAM, on multimodal mammographic and ultrasound imaging data. Using CNNs, ResNets, and transfer-learning models, we combine feature visualization with attribution analysis to establish the first clinical-adaptability evaluation framework for XAI in breast imaging. Our key contribution is a standardized, four-dimensional assessment metric covering credibility, stability, clinical consistency, and interactive usability. Experimental results show that this approach substantially improves decision transparency, lets radiologists quickly verify the diagnostic rationale, and makes AI outputs more understandable and trustworthy in clinical practice. The work provides both a methodological foundation and a practical paradigm for deploying trustworthy AI in early breast cancer screening and personalized diagnosis.
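As a concrete illustration of the attribution analysis mentioned above, the sketch below shows how Grad-CAM produces a class-discriminative heatmap from a CNN. It is a minimal example under stated assumptions, not the authors' pipeline: a pretrained torchvision ResNet-18 stands in for the breast-imaging classifier, and a random tensor stands in for a preprocessed mammogram or ultrasound image.

```python
# Minimal Grad-CAM sketch (PyTorch). Assumptions: torchvision ResNet-18 stands in for
# the breast-imaging CNN, and a random tensor stands in for a preprocessed scan.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block; its feature maps drive the heatmap.
target_layer = model.layer4[-1]
target_layer.register_forward_hook(save_activation)
target_layer.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)        # placeholder for a preprocessed image
logits = model(x)
class_idx = logits[0].argmax()         # explain the predicted class
model.zero_grad()
logits[0, class_idx].backward()

# Channel weights = global-average-pooled gradients; weighted sum of activations -> CAM.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap in [0, 1]
```

Overlaying `cam` on the input image highlights the regions the network relied on, which is the kind of visual evidence a radiologist can check against the lesion location.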
📝 Abstract
Breast cancer (BC) is one of the most common malignancies affecting women worldwide, and improved diagnostic methodologies are needed to achieve better clinical outcomes. This article provides a comprehensive exploration of Explainable Artificial Intelligence (XAI) techniques applied to breast cancer detection and diagnosis. As Artificial Intelligence (AI) continues to permeate healthcare, and oncology in particular, transparent and interpretable models are essential for sound clinical decision-making and patient care. The review discusses how XAI approaches such as SHAP, LIME, Grad-CAM, and others integrate with the machine learning and deep learning models used for breast cancer detection and classification. By examining the main breast imaging modalities, including mammography and ultrasound, and how AI processes them, the paper shows how XAI can support more accurate diagnoses and personalized treatment plans. It also examines the challenges of implementing these techniques and the importance of developing standardized metrics for evaluating XAI's effectiveness in clinical settings. Through detailed analysis and discussion, the article aims to show the potential of XAI to bridge the gap between complex AI models and practical healthcare applications, thereby fostering trust and understanding among medical professionals and improving patient outcomes.
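To make the SHAP-style attribution discussed above concrete, the following sketch applies it to the scikit-learn Wisconsin breast-cancer dataset (tabular cytology features) with a random-forest classifier. Both the dataset and the model are illustrative assumptions chosen to keep the example self-contained; the imaging pipelines reviewed in the paper would instead pair SHAP or LIME with CNN-based classifiers.

```python
# Minimal SHAP sketch. Assumptions: the sklearn Wisconsin breast-cancer dataset and a
# random-forest classifier are used purely for illustration, not the paper's models.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)

# Depending on the SHAP version, the result is a list (one array per class) or a
# 3-D array; either way, keep the attributions for class 1 (for a binary forest,
# the other class's values are simply the negation).
class1_values = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Global summary: which features drive the class-1 probability, and how strongly.
shap.summary_plot(class1_values, X_test)
```

The resulting beeswarm plot ranks features by their overall contribution and shows the direction of each feature's effect, which is the kind of per-case and global evidence the standardized evaluation metrics discussed in this review are meant to assess.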