🤖 AI Summary
Alzheimer’s disease (AD) exhibits subtle early symptoms, and integrating multimodal neuroimaging (e.g., MRI and PET) enhances diagnostic sensitivity, yet clinical practice frequently suffers from missing modalities. To address this, we propose a style-transfer framework that, for the first time, disentangles modality-agnostic, AD-specific content representations from modality-specific style representations. Our method jointly optimizes domain-adversarial training and a generative adversarial network to achieve high-fidelity cross-modal image synthesis. Trained end-to-end on the ADNI dataset, the synthesized images exhibit minimal statistical deviation from real data (mean Cohen’s *d* < 0.19), preserve discriminative AD biomarkers, and demonstrate clinical utility. This approach reduces the need for additional imaging examinations and overcomes fundamental limitations of conventional unimodal imputation methods.
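The joint objective sketched above combines three terms: a reconstruction term that keeps AD-specific content, a GAN term that makes the output carry the target modality's style, and a domain-adversarial term that (via gradient reversal during training) strips modality cues from the shared content representation. The following minimal sketch shows only how these loss terms compose; the function names, loss weights, and sigmoid/cross-entropy choices are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def domain_adversarial_loss(domain_logits, modality_labels):
    # Binary cross-entropy of a modality (MRI vs. PET) classifier on the
    # shared content features. The classifier minimizes this; the content
    # encoder maximizes it (gradient reversal), so content stays
    # modality-agnostic.
    p = 1.0 / (1.0 + np.exp(-domain_logits))  # sigmoid
    eps = 1e-12
    return -np.mean(modality_labels * np.log(p + eps)
                    + (1 - modality_labels) * np.log(1 - p + eps))

def gan_generator_loss(fake_logits):
    # Non-saturating GAN loss: the generator wants the style
    # discriminator to score synthesized images as real, D(fake) -> 1.
    p = 1.0 / (1.0 + np.exp(-fake_logits))
    return -np.mean(np.log(p + 1e-12))

def total_loss(recon_err, fake_logits, domain_logits, labels,
               lam_gan=1.0, lam_dom=0.1):
    # Illustrative composition: reconstruction preserves AD content, the
    # GAN term adds modality style, and the (reversed, hence subtracted
    # from the encoder's view) domain term removes modality information
    # from the content code. Weights lam_gan/lam_dom are hypothetical.
    return (recon_err
            + lam_gan * gan_generator_loss(fake_logits)
            - lam_dom * domain_adversarial_loss(domain_logits, labels))
```

In an actual training loop the domain classifier and the GAN discriminator would be updated with their own minimization steps; only the encoder/generator side of the objective is shown here.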
📝 Abstract
Characterizing the preclinical stage of Alzheimer’s Disease (AD) with a single imaging modality is difficult because its early symptoms are quite subtle. Many neuroimaging studies are therefore curated with multiple imaging modalities, e.g., MRI and PET; however, it is often challenging to acquire all of them from every subject, so missing data become inevitable. In this regard, we propose a framework that generates a subject’s unobserved imaging measures from their existing ones, thereby reducing the need for additional examinations. Our framework transfers modality-specific style while preserving AD-specific content: domain-adversarial training preserves modality-agnostic but AD-specific information, while a generative adversarial network adds an indistinguishable modality-specific style. We evaluate the proposed framework on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) study and compare it with other imputation methods in terms of generated data quality. A small average Cohen’s d (< 0.19) between our generated measures and the real ones suggests that the synthetic data are practically usable regardless of modality type.
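The reported evaluation metric, Cohen's d, measures the standardized difference between the means of two samples (here, real vs. generated measures); values below 0.2 are conventionally read as a small effect. A minimal sketch of the pooled-standard-deviation form, assuming that is the variant used:

```python
import math

def cohens_d(x, y):
    """Cohen's d with pooled standard deviation.

    Illustrative implementation: a |d| below ~0.2 (cf. the paper's
    average d < 0.19) indicates the two samples differ by only a small
    fraction of their common spread.
    """
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    # Unbiased (n - 1) sample variances for each group.
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Pooled standard deviation across both groups.
    pooled = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled
```

For example, two samples whose means differ by 0.1 while each spreads over roughly a unit range yield a |d| well under 0.19, the paper's reported upper bound.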