Deep Generative Modeling with Spatial and Network Images: An Explainable AI (XAI) Approach

๐Ÿ“… 2025-05-19
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the challenge of modeling the amplitude of low-frequency fluctuations (ALFF) from resting-state fMRI in the Adolescent Brain Cognitive Development (ABCD) Study. We propose the first end-to-end interpretable generative framework that jointly models cortical structural features and a multi-task co-activation network to synthesize and infer ALFF spatial maps. Methodologically, we explicitly integrate multi-scale spatial–network interactions, subject-level heterogeneity, and spatial smoothness; incorporate network encoding, spatial regularization, and Monte Carlo dropout to enable principled uncertainty quantification; and directly process high-resolution data without heavy preprocessing. On the ABCD dataset, our model reveals statistically significant whole-brain associations between cortical morphology and ALFF. It achieves prediction accuracy competitive with state-of-the-art methods, delivers more reliable uncertainty estimates, and substantially improves computational efficiency.

๐Ÿ“ Abstract
This article addresses the challenge of modeling the amplitude of spatially indexed low-frequency fluctuations (ALFF) in resting-state functional MRI as a function of cortical structural features and a multi-task co-activation network in the Adolescent Brain Cognitive Development (ABCD) Study. It proposes a generative model that integrates the effects of spatially varying inputs and a network-valued input using deep neural networks to capture complex non-linear and spatial associations with the output. The method models spatial smoothness, accounts for subject heterogeneity and complex associations between network and spatial images at different scales, enables accurate inference of each image's effect on the output image, and allows prediction with uncertainty quantification via Monte Carlo dropout, contributing one of the first Explainable AI (XAI) frameworks for heterogeneous imaging data. The model is highly scalable to high-resolution data without the heavy pre-processing or summarization often required by Bayesian methods. Empirical results demonstrate its strong performance compared to existing statistical and deep learning methods. Applied to the ABCD data, the XAI model revealed associations between cortical features and ALFF throughout the entire brain; it performed comparably to existing methods in predictive accuracy while providing superior uncertainty quantification and faster computation, demonstrating its effectiveness for large-scale neuroimaging analysis. Open-source Python software implementing XAI is available.
Problem

Research questions and friction points this paper is trying to address.

Modeling ALFF in fMRI using cortical features and co-activation networks
Integrating spatial and network inputs via deep neural networks
Providing explainable AI for heterogeneous neuroimaging data analysis
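The core fusion problem above, combining a spatially indexed image with a network-valued input in one model, can be sketched in a few lines of NumPy. Everything here (the layer sizes, the mean-style encoding of the adjacency matrix, the random untrained weights) is a hypothetical illustration of the general idea, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: V cortical vertices, each with d structural features,
# plus a V x V co-activation network for the same subject.
V, d, hidden = 100, 3, 16

spatial = rng.normal(size=(V, d))        # spatial image: per-vertex features
network = rng.uniform(size=(V, V))       # network image: co-activation matrix
network = (network + network.T) / 2      # symmetrize

# Hypothetical network encoding: project each row of the adjacency
# matrix to a low-dimensional per-vertex embedding.
W_net = rng.normal(size=(V, 8)) * 0.1
net_embed = network @ W_net              # shape (V, 8)

# Concatenate spatial features with the network embedding, then apply
# a small MLP to predict a scalar output (e.g., ALFF) at every vertex.
x = np.concatenate([spatial, net_embed], axis=1)   # shape (V, d + 8)
W1 = rng.normal(size=(d + 8, hidden)) * 0.1
W2 = rng.normal(size=(hidden, 1)) * 0.1
alff_pred = np.maximum(x @ W1, 0) @ W2             # shape (V, 1)

print(alff_pred.shape)
```

The point of the sketch is only the data flow: both input modalities are mapped into a shared per-vertex representation before the non-linear layers, so the network can learn joint spatial–network effects rather than treating the two inputs separately.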
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep neural networks integrate spatial and network inputs
Monte Carlo dropout enables uncertainty quantification
Scalable to high-resolution data without heavy pre-processing
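Monte Carlo dropout, the uncertainty mechanism named above, can be illustrated with a minimal NumPy sketch: dropout stays active at prediction time, and the spread across repeated stochastic forward passes serves as the uncertainty estimate. The toy network, weights, and dropout rate below are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy one-hidden-layer regressor with (untrained) random weights.
W1 = rng.normal(size=(4, 32)) * 0.3
W2 = rng.normal(size=(32, 1)) * 0.3
p_drop = 0.2  # dropout rate, kept ON at inference for MC dropout

def mc_forward(x):
    """One stochastic forward pass with inverted dropout on the hidden layer."""
    h = np.maximum(x @ W1, 0)
    mask = rng.random(h.shape) > p_drop
    h = h * mask / (1 - p_drop)   # rescale so the expected activation is unchanged
    return h @ W2

x = rng.normal(size=(5, 4))      # 5 inputs, 4 features each
T = 200                          # number of Monte Carlo samples
samples = np.stack([mc_forward(x) for _ in range(T)])  # shape (T, 5, 1)

pred_mean = samples.mean(axis=0)  # predictive mean
pred_std = samples.std(axis=0)    # per-input predictive uncertainty
print(pred_mean.shape, pred_std.shape)
```

Because each pass drops a different random subset of hidden units, inputs whose prediction depends heavily on a few units show larger `pred_std`, which is what makes the standard deviation usable as an uncertainty score.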