3D ReX: Causal Explanations in 3D Neuroimaging Classification

📅 2025-02-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
In 3D neuroimaging classification, the lack of interpretability of AI models undermines clinical trust. Method: the paper proposes the first causality-based post-hoc explanation framework for 3D models, grounded in the theory of actual causality and moving beyond attribution methods that capture only statistical correlations. By estimating the causal responsibility of input voxels for the model's decision, and combining 3D gradient-constrained optimization with responsibility-map generation, the method localizes the causally critical brain regions driving classification outcomes. Contribution/Results: the framework is compatible with mainstream segmentation and classification models. Evaluated on stroke detection, it identifies causally sensitive regions associated with lesions, improving the clinical interpretability and credibility of model decisions, and it establishes a new paradigm for causal interpretability research in medical AI.
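The paper does not publish its algorithm in this summary, so the sketch below only illustrates the general idea of a responsibility map: perturb regions of the input volume and score how much each region's removal changes the classifier's output. The patch size, zero baseline, and `toy_classifier` are illustrative assumptions, not the actual 3D ReX method (which is grounded in actual causality rather than simple occlusion).

```python
import numpy as np

def responsibility_map(volume, classify, patch=4, baseline=0.0):
    """Occlusion-style responsibility map for a 3D volume.

    Each patch is masked with `baseline` and the drop in the classifier's
    score is recorded; a larger drop means the patch carries more
    responsibility for the decision. This is a crude stand-in for the
    causal-responsibility computation described in the paper.
    """
    base_score = classify(volume)
    resp = np.zeros_like(volume, dtype=float)
    depth, height, width = volume.shape
    for z in range(0, depth, patch):
        for y in range(0, height, patch):
            for x in range(0, width, patch):
                masked = volume.copy()
                masked[z:z + patch, y:y + patch, x:x + patch] = baseline
                drop = base_score - classify(masked)
                # negative drops (masking helped the score) are clipped to 0
                resp[z:z + patch, y:y + patch, x:x + patch] = max(drop, 0.0)
    return resp

# Toy "stroke detector" (hypothetical): scores the mean intensity
# inside a fixed region of interest.
def toy_classifier(vol):
    return float(vol[4:8, 4:8, 4:8].mean())

vol = np.zeros((12, 12, 12))
vol[4:8, 4:8, 4:8] = 1.0  # synthetic "lesion"
rmap = responsibility_map(vol, toy_classifier)
# rmap peaks exactly on the lesion voxels and is zero elsewhere
```

In practice the classifier would be a trained 3D CNN and the perturbation strategy far more refined, but the output has the same shape as the input volume and can be overlaid on the scan as a heatmap, which is the form of explanation the paper evaluates on stroke detection.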

📝 Abstract
Explainability remains a significant problem for AI models in medical imaging, making it challenging for clinicians to trust AI-driven predictions. We introduce 3D ReX, the first causality-based post-hoc explainability tool for 3D models. 3D ReX uses the theory of actual causality to generate responsibility maps which highlight the regions most crucial to the model's decision. We test 3D ReX on a stroke detection model, providing insight into the spatial distribution of features relevant to stroke.
Problem

Research questions and friction points this paper is trying to address.

Explainability in medical imaging AI
Causality-based 3D model explainability
Spatial feature distribution in stroke detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causality-based explainability tool
Generates responsibility maps
Focuses on 3D neuroimaging classification
Melane Navaratnarajah — King's College London, UK
Sophie A. Martin — University College London, UK
David A. Kelly — King's College London
Topics: Information Theory, Causality, Explainable AI, Software Engineering
Nathan Blake — King's College London, University College London
Topics: Medical AI, Explainable AI, Vibrational Spectroscopy
Hana Chockler — King's College London, UK