🤖 AI Summary
Widely adopted XAI methods in neuroimaging (e.g., GradCAM, LRP) lack rigorous validation and exhibit systematic localization failures, undermining model interpretability and trust.
Method: We introduce the first XAI validation framework grounded in real brain MRI data: prediction tasks with known ground-truth signal sources establish a quantifiable explanation benchmark, against which gradient-based methods including SmoothGrad, GradCAM, and LRP are evaluated objectively on ~45,000 structural brain MRI scans, without reliance on image perturbations.
Results: SmoothGrad significantly outperforms the other methods; domain mismatch, not implementation flaws, is the primary cause of failure for GradCAM and LRP. Crucially, methods that make fewer assumptions about data structure (e.g., that avoid layer-specific weighting or propagation rules designed around natural images) prove more robust in neuroimaging. This work establishes a new empirical standard and validation paradigm for XAI in medical imaging, providing critical evidence to guide method selection and evaluation rigor in clinical AI applications.
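The "layer-specific weighting" at issue refers to GradCAM's core step: each convolutional feature map is weighted by the spatial mean of its gradient, summed, and passed through a ReLU, producing a coarse map at the resolution of the chosen layer. A minimal sketch of that combination step (array shapes and names here are illustrative, not taken from the paper's implementation):

```python
import numpy as np

def gradcam_map(activations, gradients):
    """GradCAM combination step for one convolutional layer.

    activations, gradients: arrays of shape (channels, H, W), holding the
    layer's feature maps and the gradient of the class score w.r.t. them.
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: one scalar per channel
    cam = np.tensordot(weights, activations, axes=1)  # sum_k alpha_k * A_k -> (H, W)
    return np.maximum(cam, 0.0)                       # ReLU keeps positively contributing regions
```

The per-channel pooling and coarse spatial resolution are design choices inherited from natural-image CNNs, which is where the domain-mismatch argument applies to 3D brain MRI.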
📝 Abstract
Trustworthy interpretation of deep learning models is critical for neuroimaging applications, yet commonly used Explainable AI (XAI) methods lack rigorous validation, risking misinterpretation. We performed the first large-scale, systematic comparison of XAI methods on ~45,000 structural brain MRIs using a novel XAI validation framework. This framework establishes verifiable ground truth by constructing prediction tasks with known signal sources (from localized anatomical features to subject-specific clinical lesions) without artificially altering input images. Our analysis reveals systematic failures in two of the most widely used methods: GradCAM consistently failed to localize predictive features, while Layer-wise Relevance Propagation generated extensive, artifactual explanations that suggest incompatibility with neuroimaging data characteristics. Our results indicate that these failures stem from a domain mismatch, where methods with design principles tailored to natural images require substantial adaptation for neuroimaging data. In contrast, the simpler, gradient-based method SmoothGrad, which makes fewer assumptions about data structure, proved consistently accurate, suggesting its conceptual simplicity makes it more robust to this domain shift. These findings highlight the need for domain-specific adaptation and validation of XAI methods, suggest that interpretations from prior neuroimaging studies using standard XAI methodology warrant re-evaluation, and provide urgent guidance for practical application of XAI in neuroimaging.
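SmoothGrad's conceptual simplicity is easy to see: it simply averages plain input gradients over noise-perturbed copies of the input, with no layer-specific rules. A minimal sketch on a toy differentiable function (the function, noise scale, and sample count are illustrative; in a real pipeline the gradient would come from the network via autograd):

```python
import numpy as np

def toy_grad(x):
    # Analytic input gradient of a toy scalar "model" f(x) = sum(x**2);
    # stands in for the network's input gradient in a real application.
    return 2.0 * x

def smoothgrad(x, grad_fn, n_samples=50, noise_std=0.1, seed=0):
    """Average input gradients over Gaussian-perturbed copies of x."""
    rng = np.random.default_rng(seed)
    total = np.zeros_like(x)
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, noise_std, size=x.shape)
        total += grad_fn(noisy)
    return total / n_samples

x = np.array([1.0, -2.0, 0.5])
saliency = smoothgrad(x, toy_grad)
# With zero-mean noise, the averaged gradient stays close to the plain gradient 2*x.
```

Because the method touches only the model's input gradient, it carries over to 3D MRI volumes without architectural assumptions, which is consistent with the robustness result reported above.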