🤖 AI Summary
To address the low diagnostic accuracy and poor interpretability of automated MRI-based classification for multiple brain disorders, including Alzheimer's disease and brain tumors, this paper proposes DGG-XNet, a hybrid deep learning architecture with dual backbones: VGG16 and DenseNet121. The model employs feature-level fusion to jointly leverage VGG16's hierarchical spatial representations and DenseNet121's dense feature reuse, and integrates Grad-CAM for transparent, class-specific decision visualization. The authors describe this as the first framework to balance high accuracy with strong interpretability in multi-class brain disease classification. Leveraging transfer learning and joint training on multi-center MRI data, the model achieves 91.33% accuracy on a combined BraTS 2021 and Kaggle dataset, with precision, recall, and F1-score all exceeding 91%, significantly outperforming single-backbone baselines.
📝 Abstract
Accurate diagnosis of brain disorders such as Alzheimer's disease and brain tumors remains a critical challenge in medical imaging. Conventional methods based on manual MRI analysis are often inefficient and error-prone. To address this, we propose DGG-XNet, a hybrid deep learning model integrating VGG16 and DenseNet121 to enhance feature extraction and classification. DenseNet121 promotes feature reuse and efficient gradient flow through dense connectivity, while VGG16 contributes strong hierarchical spatial representations. Their fusion enables robust multiclass classification of neurological conditions. Grad-CAM is applied to visualize salient regions, enhancing model transparency. Trained on a combined dataset from BraTS 2021 and Kaggle, DGG-XNet achieved a test accuracy of 91.33%, with precision, recall, and F1-score all exceeding 91%. These results highlight DGG-XNet's potential as an effective and interpretable tool for computer-aided diagnosis (CAD) of neurodegenerative and oncological brain disorders.
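The Grad-CAM visualization mentioned in the abstract weights each feature map of a chosen convolutional layer by the average gradient of the class score with respect to that map, sums the weighted maps, and applies a ReLU. A minimal sketch on a toy network (hypothetical stand-in; the paper applies Grad-CAM to the fused VGG16/DenseNet121 model):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CNN standing in for the paper's fused backbone (assumption for illustration).
conv = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
head = nn.Linear(8, 4)

x = torch.randn(1, 3, 32, 32)
fmap = conv(x)                         # target feature maps, shape (1, 8, 32, 32)
fmap.retain_grad()                     # keep gradients on this non-leaf tensor
logits = head(fmap.mean(dim=(2, 3)))   # global-average-pool + classifier
cls = logits.argmax(dim=1).item()
logits[0, cls].backward()              # gradient of the predicted class score

# Grad-CAM: per-map importance = mean gradient; weighted sum of maps, then ReLU.
weights = fmap.grad.mean(dim=(2, 3), keepdim=True)  # (1, 8, 1, 1)
cam = F.relu((weights * fmap).sum(dim=1))           # (1, 32, 32) saliency map
```

Upsampled to the input resolution and overlaid on the MRI slice, `cam` highlights the regions that drove the class prediction, which is the transparency mechanism the abstract refers to.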