Going beyond explainability in multi-modal stroke outcome prediction models

📅 2025-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal models integrating neuroimaging and tabular clinical data often lack interpretability in stroke prognosis prediction. Method: We propose partially interpretable multimodal deep transformation models (dTMs) that fuse brain imaging with clinical tabular data to predict 3-month functional outcomes. To enhance transparency, we adapt Grad-CAM and occlusion analysis to the dTM architecture to generate image attribution maps, and integrate similarity-based analysis to uncover pathophysiological associations. Contribution/Results: The models achieve AUC values close to 0.8. Tabular feature importance analysis identifies pre-stroke functional independence and admission NIHSS score as the most predictive clinical variables. Attribution maps localize critical brain regions, including the frontal lobe, and reveal interpretable neuroimaging patterns associated with older age and poor outcomes. These findings support the identification of false predictions and enable hypothesis generation for novel imaging biomarkers.

📝 Abstract
Aim: This study aims to enhance the interpretability and explainability of multi-modal prediction models integrating imaging and tabular patient data. Methods: We adapt the xAI methods Grad-CAM and Occlusion to multi-modal, partly interpretable deep transformation models (dTMs). dTMs combine statistical and deep learning approaches to simultaneously achieve state-of-the-art prediction performance and interpretable parameter estimates, such as odds ratios for tabular features. Based on brain imaging and tabular data from 407 stroke patients, we trained dTMs to predict functional outcome three months after stroke. We evaluated the models using different discriminatory metrics, and used the adapted xAI methods to generate explanation maps for the identification of relevant image features and for error analysis. Results: The dTMs achieve state-of-the-art prediction performance, with area under the curve (AUC) values close to 0.8. The most important tabular predictors of functional outcome are functional independence before stroke and NIHSS on admission, a neurological score indicating stroke severity. Explanation maps calculated from the imaging dTMs highlight critical brain regions such as the frontal lobe, which is linked to age, which in turn increases the risk of an unfavorable outcome. Similarity plots of the explanation maps reveal distinct patterns that give insight into stroke pathophysiology, support the development of novel predictors of stroke outcome, and enable the identification of false predictions. Conclusion: By adapting methods for explanation maps to dTMs, we enhanced the explainability of multi-modal and partly interpretable prediction models. The resulting explanation maps facilitate error analysis and support hypothesis generation regarding the significance of specific image regions in outcome prediction.
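The occlusion method mentioned in the abstract can be illustrated with a minimal sketch: slide a baseline patch over the input image and record how much the model's output drops at each position. The "model" below is a hypothetical linear scorer standing in for the paper's dTM (the weight mask, patch size, and function names are illustrative assumptions, not the authors' implementation).

```python
import numpy as np

# Toy "model": score = weighted sum of pixel intensities.
# Hypothetical stand-in for the paper's dTM; any callable image -> score works.
rng = np.random.default_rng(0)
weights = np.zeros((16, 16))
weights[4:8, 4:8] = 1.0          # the model only "looks at" this region

def model_score(img):
    return float((img * weights).sum())

def occlusion_map(img, patch=4, baseline=0.0):
    """Slide a patch of `baseline` values over the image and record
    how much the model's score drops at each position."""
    base = model_score(img)
    h, w = img.shape
    attr = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(attr.shape[0]):
        for j in range(attr.shape[1]):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            attr[i, j] = base - model_score(occluded)  # large drop = important
    return attr

img = rng.random((16, 16))
attr = occlusion_map(img)
i, j = np.unravel_index(attr.argmax(), attr.shape)
print(i, j)  # hottest occlusion position coincides with the weighted region
```

Because the toy model attends only to the region at rows/columns 4..7, the largest score drop occurs when the patch covers exactly that region, so the occlusion map recovers the "important" area, which is the behavior the paper exploits for attribution on brain images.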
Problem

Research questions and friction points this paper is trying to address.

Enhance interpretability of multi-modal stroke prediction models
Identify key brain regions and tabular predictors for stroke outcomes
Improve error analysis and hypothesis generation using explanation maps
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapt Grad-CAM and Occlusion for multi-modal dTMs
Combine statistical and deep learning in dTMs
Generate explanation maps for error analysis
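The Grad-CAM adaptation listed above can likewise be sketched in a few lines. Grad-CAM weights each feature map of the last convolutional layer by the spatial mean of the score's gradient with respect to that map, then takes the ReLU of the weighted sum. The toy setup below uses a linear readout head so the gradients are available in closed form (the head weights `alpha` and map sizes are illustrative assumptions, not the paper's architecture).

```python
import numpy as np

# Toy setting: K feature maps with a linear readout head, so the gradient
# of the score with respect to each map is known analytically.
K, H, W = 3, 8, 8
rng = np.random.default_rng(1)
A = rng.random((K, H, W))            # activations of the last conv layer
alpha = np.array([2.0, -1.0, 0.5])   # hypothetical head weights

# score = sum_k alpha_k * mean(A_k)  =>  d score / d A_k = alpha_k / (H * W)
grads = np.repeat(alpha / (H * W), H * W).reshape(K, H, W)

def grad_cam(activations, gradients):
    """Grad-CAM: channel weights are the spatial mean of the gradients;
    the map is the ReLU of the weighted sum of the activation maps."""
    w = gradients.mean(axis=(1, 2))                       # one weight per channel
    cam = np.maximum((w[:, None, None] * activations).sum(axis=0), 0.0)
    return cam

cam = grad_cam(A, grads)
print(cam.shape)  # (8, 8)
```

In practice the gradients come from backpropagation through the trained network rather than a closed form, and the resulting map is upsampled to the input resolution before being overlaid on the brain image.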
Jonas Brandli
Zurich University of Applied Sciences, Institute of Data Analysis and Process Design (IDP)
Maurice Schneeberger
University of Zurich, Epidemiology, Biostatistics and Prevention Institute (EBPI)
L. Herzog
University Hospital Zurich, Department of Neurology; University of Zurich, Faculty of Medicine
Loran Avci
Zurich University of Applied Sciences, Institute of Data Analysis and Process Design (IDP)
Nordin Dari
Zurich University of Applied Sciences, Institute of Data Analysis and Process Design (IDP)
Martin Hänsel
University of Zurich, Epidemiology, Biostatistics and Prevention Institute (EBPI); University Hospital Zurich, Department of Neurology
Hakim Baazaoui
University Hospital Zurich, Department of Neurology
Pascal Bühler
Zurich University of Applied Sciences, Institute of Data Analysis and Process Design (IDP)
Susanne Wegener
Neurology; University Hospital Zurich and University of Zurich
Stroke
Beate Sick
ZHAW, UZH
deep learning · statistics · causality · medical research