🤖 AI Summary
This survey systematically reviews diffusion models for computational neuroimaging. Motivated by key challenges in the field—including modality heterogeneity, limited sample sizes, and weak interpretability in neural decoding—it organizes existing work along three axes of the diffusion process: the denoising starting point, the conditional input, and the generation target. Under this taxonomy, the survey covers denoising diffusion probabilistic models (DDPMs), conditional diffusion, latent diffusion, and cross-modal alignment methods as applied to fMRI, MRI, and EEG data, spanning tasks such as data enhancement, disease diagnosis, and brain decoding. An accompanying open-source repository, dm4neuro, indexes the surveyed literature.
📝 Abstract
Computational neuroimaging involves analyzing brain images or signals to provide mechanistic insights and predictive tools for human cognition and behavior. While diffusion models have shown stable training and high-quality generation on natural images, there is increasing interest in adapting them to brain data for neurological tasks such as data enhancement, disease diagnosis, and brain decoding. This survey provides an overview of recent efforts to integrate diffusion models into computational neuroimaging. We begin by introducing the common neuroimaging data modalities, followed by the diffusion formulations and conditioning mechanisms. We then discuss how variations in the denoising starting point, conditioning input, and generation target of diffusion models are developed to enhance specific neuroimaging tasks. For a comprehensive overview of the ongoing research, we provide a publicly available repository at https://github.com/JoeZhao527/dm4neuro.
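To make the taxonomy concrete, the sketch below illustrates a standard DDPM forward/reverse loop where the two axes discussed above are explicit parameters: `t_start` controls the denoising starting point (full noise for generation, a partially noised input for enhancement-style tasks), and `cond` carries an optional conditioning signal. This is a minimal illustrative sketch, not code from the surveyed works; the `denoiser` callable (a trained noise predictor) and all function names are hypothetical.

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    # Linear variance schedule, as in the original DDPM formulation.
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def q_sample(x0, t, alpha_bars, rng):
    # Forward process: noise a clean sample x0 to timestep t in closed form.
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def p_sample_loop(denoiser, x_t, t_start, betas, alphas, alpha_bars,
                  cond=None, rng=None):
    # Reverse process: iterate from t_start down to 0.
    # t_start = T - 1 starts from pure noise (generation); a smaller
    # t_start starts from a partially noised input (enhancement-style use).
    rng = rng if rng is not None else np.random.default_rng(0)
    x = x_t
    for t in range(t_start, -1, -1):
        eps_hat = denoiser(x, t, cond)  # predicted noise (conditioned)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            x = mean + np.sqrt(betas[t]) * rng.standard_normal(x.shape)
        else:
            x = mean  # no noise added at the final step
    return x
```

For instance, full generation uses `p_sample_loop(denoiser, rng.standard_normal(shape), T - 1, ...)`, while a denoising-based enhancement pass would first call `q_sample` on a measured signal with a small `t` and then run the loop from that `t`.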