🤖 AI Summary
This study addresses the limited interpretability and low clinical trustworthiness of AI models in prostate cancer diagnosis from MRI. To this end, we propose a novel multi-attribute interpretable generation framework. Methodologically, we introduce the first medical-image-oriented multi-attribute disentangled explanation paradigm; design a feature-pyramid-enhanced encoder to optimize multi-scale latent representations; and develop a joint generative-classification model based on an improved GAN, integrating projection-based explanation modeling with end-to-end training. Experiments on prostate cancer MRI data demonstrate that our framework significantly improves explanation fidelity (+12.3%) and clinical consistency (91.7% expert acceptance rate). The generated explanations explicitly disentangle lesion location, morphology, and signal intensity—enabling physician interactive verification and collaborative decision-making. This work establishes a new paradigm for trustworthy AI-assisted diagnosis in radiology.
📝 Abstract
Prostate cancer, a growing global health concern, necessitates precise diagnostic tools, with Magnetic Resonance Imaging (MRI) offering high-resolution soft tissue imaging that significantly enhances diagnostic accuracy. Recent advancements in explainable AI and representation learning have significantly improved prostate cancer diagnosis by enabling automated and precise lesion classification. However, existing explainable AI methods, particularly those based on frameworks like generative adversarial networks (GANs), are predominantly developed for natural image generation, and their application to medical imaging often leads to suboptimal performance due to the unique characteristics and complexity of medical image. To address these challenges, our paper introduces three key contributions. First, we propose ProjectedEx, a generative framework that provides interpretable, multi-attribute explanations, effectively linking medical image features to classifier decisions. Second, we enhance the encoder module by incorporating feature pyramids, which enables multiscale feedback to refine the latent space and improves the quality of generated explanations. Additionally, we conduct comprehensive experiments on both the generator and classifier, demonstrating the clinical relevance and effectiveness of ProjectedEx in enhancing interpretability and supporting the adoption of AI in medical settings. Code will be released at https://github.com/Richardqiyi/ProjectedEx