Explainable Graph Neural Networks: Understanding Brain Connectivity and Biomarkers in Dementia

📅 2025-09-22
🤖 AI Summary
Dementia exhibits pronounced clinical and biological heterogeneity, complicating accurate diagnosis and reliable subtype differentiation. To address key limitations of existing graph neural networks (GNNs)—including poor robustness, limited interpretability, and data scarcity—in modeling brain connectomes, this study presents a systematic review of eXplainable GNNs (XGNNs) for dementia research. We propose the first taxonomy of XGNN interpretability methods specifically tailored to dementia tasks, encompassing multi-disease diagnostic scenarios such as Alzheimer’s and Parkinson’s diseases. By integrating GNNs with diverse attribution techniques, we elucidate region-specific abnormalities and large-scale network disruption patterns. Furthermore, we prospectively examine synergistic integration pathways between XGNNs and large language models (LLMs) for early dementia detection. Our analysis highlights the critical role of XGNNs in enhancing diagnostic transparency, identifying candidate neuroimaging biomarkers, and facilitating clinical translation—while also delineating persistent challenges in generalizability, validation, and real-world deployment.

📝 Abstract
Dementia is a progressive neurodegenerative disorder with multiple etiologies, including Alzheimer's disease, Parkinson's disease, frontotemporal dementia, and vascular dementia. Its clinical and biological heterogeneity makes diagnosis and subtype differentiation highly challenging. Graph Neural Networks (GNNs) have recently shown strong potential in modeling brain connectivity, but their limited robustness, data scarcity, and lack of interpretability constrain clinical adoption. Explainable Graph Neural Networks (XGNNs) have emerged to address these barriers by combining graph-based learning with interpretability, enabling the identification of disease-relevant biomarkers, analysis of brain network disruptions, and provision of transparent insights for clinicians. This paper presents the first comprehensive review dedicated to XGNNs in dementia research. We examine their applications across Alzheimer's disease, Parkinson's disease, mild cognitive impairment, and multi-disease diagnosis. A taxonomy of explainability methods tailored for dementia-related tasks is introduced, alongside comparisons of existing models in clinical scenarios. We also highlight challenges such as limited generalizability, underexplored domains, and the integration of Large Language Models (LLMs) for early detection. By outlining both progress and open problems, this review aims to guide future work toward trustworthy, clinically meaningful, and scalable use of XGNNs in dementia research.
Problem

Research questions and friction points this paper is trying to address.

Diagnosing dementia subtypes is challenging due to clinical heterogeneity
Graph Neural Networks lack interpretability for clinical adoption
Identifying dementia biomarkers requires transparent brain network analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combining graph-based learning with interpretability methods
Identifying disease-relevant biomarkers from brain connectivity
Providing transparent insights for clinical dementia diagnosis
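The recipe behind these contributions — graph-based learning over a brain connectome plus an attribution method that ranks regions by influence — can be sketched minimally. The following is a hypothetical illustration, not a model from the paper: it uses a random toy connectome, an untrained one-layer GCN-style propagation, and occlusion-based node attribution (one of the perturbation-style techniques the XGNN literature covers).

```python
import numpy as np

# Toy symmetric "connectome": 5 brain regions, edge weights stand in
# for functional connectivity strengths (random, for illustration only).
rng = np.random.default_rng(0)
A = rng.random((5, 5))
A = (A + A.T) / 2                 # symmetrize
np.fill_diagonal(A, 0.0)

X = rng.random((5, 3))            # per-region node features
W = rng.random((3, 1))            # weights of a frozen, hypothetical GCN layer

def gcn_score(A, X, W):
    """One GCN-style propagation (D^-1 * A_hat * X * W), mean-pooled to a scalar."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    d_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)   # row normalization
    H = d_inv * (A_hat @ X @ W)
    return H.mean()                                  # graph-level "diagnosis" score

base = gcn_score(A, X, W)

# Occlusion attribution: silence each region in turn and measure
# how much the graph-level score shifts.
importance = np.zeros(A.shape[0])
for i in range(A.shape[0]):
    A_occ, X_occ = A.copy(), X.copy()
    A_occ[i, :] = 0.0
    A_occ[:, i] = 0.0             # disconnect region i from the network
    X_occ[i, :] = 0.0             # blank its features
    importance[i] = abs(base - gcn_score(A_occ, X_occ, W))

ranking = np.argsort(-importance) # most influential regions first
print(ranking, np.round(importance, 4))
```

In a real pipeline the scores would come from a trained classifier and the resulting ranking would be compared against known dementia-related regions; here the point is only the shape of the attribution loop.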