MINT: Multimodal Imaging-to-Speech Knowledge Transfer for Early Alzheimer's Screening

📅 2026-02-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses two complementary limitations in early screening for mild cognitive impairment (MCI): speech-based classifiers lack biological grounding, while neuroimaging is costly and hard to access at population scale. To bridge this gap, the authors propose MINT, a framework for cross-modal knowledge transfer from MRI to speech, which they describe as the first of its kind. Using a frozen MRI teacher model, a residual projection head, a combined geometric loss, and self-supervised pretraining, MINT embeds neuroimaging-derived biomarker structure into a speech encoder, aligning its representations with MRI decision boundaries without requiring imaging hardware at inference. On the ADNI-4 dataset, the aligned speech-only model achieves an AUC of 0.720, comparable to a pure speech baseline (0.711), while multimodal fusion improves over MRI alone (0.973 vs. 0.958), offering a scalable, biologically grounded paradigm for MCI screening.

📝 Abstract
Alzheimer's disease is a progressive neurodegenerative disorder in which mild cognitive impairment (MCI) marks a critical transition between aging and dementia. Neuroimaging modalities, such as structural MRI, provide biomarkers of this transition; however, their high costs and infrastructure needs limit their deployment at a population scale. Speech analysis offers a non-invasive alternative, but speech-only classifiers are developed independently of neuroimaging, leaving decision boundaries biologically ungrounded and limiting reliability on the subtle CN-versus-MCI distinction. We propose MINT (Multimodal Imaging-to-Speech Knowledge Transfer), a three-stage cross-modal framework that transfers biomarker structure from MRI into a speech encoder at training time. An MRI teacher, trained on 1,228 subjects, defines a compact neuroimaging embedding space for CN-versus-MCI classification. A residual projection head aligns speech representations to this frozen imaging manifold via a combined geometric loss, adapting speech to the learned biomarker space while preserving imaging encoder fidelity. The frozen MRI classifier, which is never exposed to speech, is applied to aligned embeddings at inference and requires no scanner. Evaluation on ADNI-4 shows aligned speech achieves performance comparable to speech-only baselines (AUC 0.720 vs 0.711) while requiring no imaging at inference, demonstrating that MRI-derived decision boundaries can ground speech representations. Multimodal fusion improves over MRI alone (0.973 vs 0.958). Ablation studies identify dropout regularization and self-supervised pretraining as critical design decisions. To our knowledge, this is the first demonstration of MRI-to-speech knowledge transfer for early Alzheimer's screening, establishing a biologically grounded pathway for population-level cognitive triage without neuroimaging at inference.
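The alignment stage described in the abstract (a residual projection head mapping speech embeddings onto the frozen MRI manifold under a combined geometric loss) can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's implementation: the embedding dimensions, the skip-connection scheme, and the choice of cosine-plus-Euclidean terms in the loss are all assumptions, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding sizes (not given in the paper).
D_SPEECH, D_MRI = 256, 64

def residual_projection(z_speech, W, b):
    """Map a speech embedding into the MRI embedding space.

    A linear projection plus a residual-style skip: the projected
    vector is added to a truncated copy of the input, so the head
    only has to learn the cross-modal *difference*. The truncation
    skip is a placeholder for whatever residual path the paper uses.
    """
    proj = W @ z_speech + b      # (D_MRI,)
    skip = z_speech[:D_MRI]      # naive skip: first D_MRI speech dims
    return proj + skip

def geometric_alignment_loss(z_aligned, z_mri):
    """Assumed 'combined geometric loss': a cosine (direction)
    term plus a mean-squared Euclidean (distance) term, both pulling
    the aligned speech embedding toward the frozen MRI embedding."""
    cos = z_aligned @ z_mri / (
        np.linalg.norm(z_aligned) * np.linalg.norm(z_mri) + 1e-8
    )
    mse = np.mean((z_aligned - z_mri) ** 2)
    return (1.0 - cos) + mse

# Toy forward pass: one subject's speech embedding aligned to the
# corresponding MRI-teacher embedding.
z_speech = rng.standard_normal(D_SPEECH)
z_mri = rng.standard_normal(D_MRI)   # from the frozen MRI teacher
W = rng.standard_normal((D_MRI, D_SPEECH)) * 0.01
b = np.zeros(D_MRI)

z_aligned = residual_projection(z_speech, W, b)
print(z_aligned.shape, float(geometric_alignment_loss(z_aligned, z_mri)) > 0.0)
```

At inference, only the speech encoder and this projection head are needed; the frozen MRI classifier consumes `z_aligned` directly, which is how the framework avoids any scanner at test time.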
Problem

Research questions and friction points this paper is trying to address.

Alzheimer's disease
mild cognitive impairment
neuroimaging
speech analysis
early screening
Innovation

Methods, ideas, or system contributions that make the work stand out.

knowledge transfer
cross-modal alignment
neuroimaging embedding
speech representation
Alzheimer's screening