🤖 AI Summary
To address the challenges of integrating heterogeneous clinical data and the poor interpretability of deep learning models in Alzheimer’s disease (AD) diagnosis, this paper proposes a neuro-symbolic collaborative framework. It employs a 3D CNN to extract features from MRI scans, fine-tunes a medical large language model (e.g., Med-PaLM) to encode clinical guidelines and biomarker knowledge, and implements a logic-programming–based (Datalog) symbolic reasoning engine for rule-driven decision-making. The paper introduces, for the first time, an LLM-guided dynamic coupling mechanism between the symbolic reasoning and neural perception modules, enabling end-to-end traceability, editable rules, and verifiable outputs. Evaluated on the ADNI dataset, the method achieves 92.41% accuracy and 89.67% F1-score—surpassing state-of-the-art methods by 2.91% and 3.43%, respectively—and generates natural-language diagnostic rationales with 91.2% explanation consistency as validated by clinical experts.
📝 Abstract
Alzheimer's disease (AD) diagnosis is complex, requiring the integration of imaging and clinical data for accurate assessment. While deep learning has shown promise in brain MRI analysis, it often functions as a black box, limiting interpretability and lacking mechanisms to effectively integrate critical clinical data such as biomarkers, medical history, and demographic information. To bridge this gap, we propose NeuroSymAD, a neuro-symbolic framework that synergizes neural networks with symbolic reasoning. A neural network perceives brain MRI scans, while a large language model (LLM) distills medical rules to guide a symbolic system in reasoning over biomarkers and medical history. This structured integration enhances both diagnostic accuracy and explainability. Experiments on the ADNI dataset demonstrate that NeuroSymAD outperforms state-of-the-art methods by up to 2.91% in accuracy and 3.43% in F1-score while providing transparent and interpretable diagnosis.
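To make the neuro-symbolic coupling described above concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of how a neural AD-probability from MRI could be adjusted by symbolic rules distilled from clinical knowledge, with each firing rule recorded so the final decision remains traceable. All rule names, thresholds, and weights below are invented for illustration.

```python
# Hypothetical sketch of rule-guided neuro-symbolic fusion, in the spirit of
# NeuroSymAD: a neural "perception" score is refined by symbolic rules over
# clinical facts (biomarkers, history), and the fired rules form the rationale.

def apply_rules(neural_score, facts, rules):
    """Adjust a neural AD-probability with weighted symbolic rules.

    facts: dict of patient attributes (biomarkers, history, demographics)
    rules: list of (name, condition_fn, weight) triples
    Returns (adjusted_score, fired), where `fired` lists the names of the
    rules that applied, supporting a natural-language explanation.
    """
    score, fired = neural_score, []
    for name, condition, weight in rules:
        if condition(facts):          # symbolic condition over clinical facts
            score += weight           # rule nudges the diagnosis score
            fired.append(name)        # record for traceability
    return max(0.0, min(1.0, score)), fired

# Illustrative rules; thresholds and weights are placeholders, not clinical values.
RULES = [
    ("apoe4_carrier", lambda f: f.get("apoe4", 0) >= 1, 0.10),
    ("low_abeta42",   lambda f: f.get("abeta42", float("inf")) < 192, 0.12),
    ("low_mmse",      lambda f: f.get("mmse", 30) < 24, 0.15),
]

patient = {"apoe4": 1, "abeta42": 150, "mmse": 22}
score, fired = apply_rules(0.55, patient, RULES)
# score → 0.92; fired → ["apoe4_carrier", "low_abeta42", "low_mmse"]
```

In the full framework, the rule set would be distilled by the LLM and executed by a Datalog engine rather than hand-written Python lambdas; this sketch only illustrates why the resulting diagnosis is both adjustable (edit a rule) and verifiable (inspect which rules fired).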