🤖 AI Summary
This work addresses the limited interpretability and poor generalizability of EEG-based classification models for dementia (Alzheimer's disease and frontotemporal dementia). The authors propose xEEGNet, a lightweight, fully interpretable neural network with only 168 parameters, built on a modified ShallowNet backbone. It integrates band-selective filtering, learnable topographic mapping, and analysis of the embedded EEG representations. Evaluated via nested leave-N-subjects-out cross-validation, xEEGNet achieves median classification accuracy nearly matching ShallowNet (only 1.5% lower) while substantially mitigating overfitting and reducing performance variance across data splits. Crucially, the separation between control and Alzheimer's subjects in the learned embedding space strongly correlates with classification accuracy, demonstrating both clinical interpretability and robustness. This moves EEG-based dementia classification from a black-box toward a trustworthy white-box framework.
📝 Abstract
This work presents xEEGNet, a novel, compact, and explainable neural network for EEG data analysis. It is fully interpretable and reduces overfitting through a major reduction in parameters. As an application, we focused on classifying common dementia conditions, Alzheimer's disease and frontotemporal dementia, versus controls; xEEGNet is, however, broadly applicable to other neurological conditions involving spectral alterations. We started from ShallowNet, a simple and popular model from the EEGNet family. Its structure was analyzed and gradually modified to move from a "black box" to a more transparent model, without compromising performance. The learned kernels and weights were examined from a clinical standpoint to assess their medical relevance. Model variants, including ShallowNet and the final xEEGNet, were evaluated with a robust Nested-Leave-N-Subjects-Out cross-validation to obtain unbiased performance estimates. Variability across data splits was explained using embedded EEG representations, grouped by class and set, with pairwise separability quantifying group distinction. Overfitting was assessed through the training-validation loss correlation and training speed. xEEGNet uses only 168 parameters, 200 times fewer than ShallowNet, yet retains interpretability, resists overfitting, achieves comparable median performance (-1.5%), and reduces variability across splits. This variability is explained by the embedded EEG representations: higher accuracy correlates with greater separation between test-set controls and Alzheimer's cases, without significant influence from the training data. xEEGNet's ability to filter specific EEG bands, learn band-specific topographies, and rely on relevant spectral features demonstrates its interpretability. While large deep learning models are often prioritized for performance, this study shows that smaller architectures like xEEGNet can be equally effective in EEG pathology classification.
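The Nested-Leave-N-Subjects-Out scheme mentioned above partitions data at the subject level, so that no subject's EEG epochs appear in more than one of the train, validation, and test sets. A minimal sketch of such subject-level splitting is shown below; the function names and fold sizes are illustrative assumptions, not the paper's actual implementation:

```python
from itertools import combinations

def leave_n_subjects_out(subjects, n):
    """Yield (train, test) splits with n held-out test subjects.

    Splitting at the subject level (rather than the epoch level) avoids
    leakage: no subject's EEG contributes to both training and testing,
    which is what makes the performance estimate unbiased.
    """
    subject_set = set(subjects)
    for test in combinations(sorted(subject_set), n):
        train = sorted(subject_set - set(test))
        yield train, list(test)

def nested_leave_n_subjects_out(subjects, n_outer, n_inner):
    """Nested variant: an inner validation split is carved out of each
    outer training set (e.g. for early stopping or model selection),
    again at the subject level."""
    for outer_train, test in leave_n_subjects_out(subjects, n_outer):
        for inner_train, val in leave_n_subjects_out(outer_train, n_inner):
            yield inner_train, val, test
```

With four subjects and one held out at each level, this produces 4 outer splits, each with 3 inner splits, and every split keeps the three subject groups disjoint.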