xEEGNet: Towards Explainable AI in EEG Dementia Classification

📅 2025-04-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited interpretability and poor generalizability of EEG-based classification models for dementia (Alzheimer's disease and frontotemporal dementia). We propose xEEGNet, a lightweight, fully interpretable neural network with only 168 parameters, presented as the first minimalist, fully interpretable architecture for EEG analysis. Built on a modified ShallowNet backbone, it integrates band-selective filtering, learnable topographic mapping, and embedded representation analysis. Evaluated via nested leave-N-subjects-out cross-validation, xEEGNet achieves median classification accuracy nearly matching ShallowNet (only 1.5% lower) while substantially mitigating overfitting and reducing inter-subject performance variance. Crucially, the separation between control and Alzheimer's subjects in the learned embedding space strongly correlates with classification accuracy, demonstrating both clinical interpretability and robustness. This shifts EEG-based dementia classification from a black-box toward a trustworthy white-box framework.
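The summary's evaluation protocol, nested leave-N-subjects-out cross-validation, splits at the subject level so no subject's epochs leak between folds. A hypothetical sketch of such nested subject-level splitting (not the authors' code; fold counts and test fraction are illustrative) using scikit-learn's `GroupShuffleSplit`:

```python
# Hypothetical sketch of nested leave-N-subjects-out splitting (not the
# authors' code): subjects, not epochs, are held out, so no subject's EEG
# appears in more than one of the train/validation/test partitions.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

def nested_lnso_splits(subject_ids, n_outer=5, n_inner=4, test_size=0.2, seed=0):
    """Yield (train_idx, val_idx, test_idx) epoch indices with subject-level separation."""
    subject_ids = np.asarray(subject_ids)
    X = np.zeros(len(subject_ids))  # features are irrelevant for splitting
    outer = GroupShuffleSplit(n_splits=n_outer, test_size=test_size, random_state=seed)
    for dev_idx, test_idx in outer.split(X, groups=subject_ids):
        # inner loop re-splits the development subjects into train/validation
        inner = GroupShuffleSplit(n_splits=n_inner, test_size=test_size, random_state=seed)
        for tr_rel, va_rel in inner.split(X[dev_idx], groups=subject_ids[dev_idx]):
            yield dev_idx[tr_rel], dev_idx[va_rel], test_idx
```

The inner splits give unbiased validation-based model selection, while the held-out outer subjects are touched only once for the final performance estimate.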

📝 Abstract
This work presents xEEGNet, a novel, compact, and explainable neural network for EEG data analysis. It is fully interpretable and reduces overfitting through major parameter reduction. As an applicative use case, we focused on classifying common dementia conditions, Alzheimer's and frontotemporal dementia, versus controls. xEEGNet is broadly applicable to other neurological conditions involving spectral alterations. We initially used ShallowNet, a simple and popular model from the EEGNet family. Its structure was analyzed and gradually modified to move from a "black box" to a more transparent model, without compromising performance. The learned kernels and weights were examined from a clinical standpoint to assess medical relevance. Model variants, including ShallowNet and the final xEEGNet, were evaluated using robust nested leave-N-subjects-out cross-validation for unbiased performance estimates. Variability across data splits was explained using embedded EEG representations, grouped by class and set, with pairwise separability to quantify group distinction. Overfitting was assessed through training-validation loss correlation and training speed. xEEGNet uses only 168 parameters, 200 times fewer than ShallowNet, yet retains interpretability, resists overfitting, achieves comparable median performance (-1.5%), and reduces variability across splits. This variability is explained by embedded EEG representations: higher accuracy correlates with greater separation between test set controls and Alzheimer's cases, without significant influence from training data. xEEGNet's ability to filter specific EEG bands, learn band-specific topographies, and use relevant spectral features demonstrates its interpretability. While large deep learning models are often prioritized for performance, this study shows smaller architectures like xEEGNet can be equally effective in EEG pathology classification.
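The abstract uses pairwise separability of embedded EEG representations to explain accuracy variability across splits. The paper's exact metric is not given here; as an illustrative stand-in, a Fisher-style ratio of between-group distance to within-group spread captures the same idea:

```python
# Illustrative sketch (the paper's exact separability metric is not stated
# in this summary): quantify how well two groups of embedded EEG
# representations, e.g. test-set controls vs. Alzheimer's cases, separate.
import numpy as np

def fisher_separability(emb_a, emb_b):
    """Fisher-style ratio: squared distance between group means over
    the summed within-group variances. Higher means better separated."""
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    between = np.sum((mu_a - mu_b) ** 2)
    within = emb_a.var(axis=0).sum() + emb_b.var(axis=0).sum()
    return between / (within + 1e-12)  # guard against degenerate zero spread
```

Computed per data split, such a score can then be correlated with that split's test accuracy, mirroring the analysis described above.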
Problem

Research questions and friction points this paper is trying to address.

Developing explainable AI for EEG dementia classification
Reducing model complexity while maintaining interpretability
Improving generalization in EEG-based neurological disorder diagnosis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compact neural network with 168 parameters
Explainable AI for EEG dementia classification
Reduces overfitting while retaining interpretability
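The interpretability claims rest on two stages the summary names explicitly: temporal kernels that act as band-selective filters, and learned per-channel weights that form band-specific topographies. A hypothetical numpy sketch of that two-stage computation (an assumption about the structure, not the authors' implementation; shapes and names are illustrative):

```python
# Hypothetical sketch (not the authors' implementation) of the two
# interpretable stages: temporal convolution with band-selective kernels,
# then a learned spatial map that collapses channels into a topography.
import numpy as np

def temporal_then_spatial(eeg, band_kernels, spatial_maps):
    """eeg: (channels, samples); band_kernels: (F, k); spatial_maps: (F, channels).
    Returns one filtered, spatially pooled time course per band kernel."""
    C, T = eeg.shape
    F, k = band_kernels.shape
    out = np.empty((F, T - k + 1))
    for f in range(F):
        # filter every channel with the f-th band kernel ('valid' trims edges)
        filtered = np.array([np.convolve(eeg[c], band_kernels[f], mode="valid")
                             for c in range(C)])
        # collapse channels with the learned topographic weights
        out[f] = spatial_maps[f] @ filtered
    return out
```

Because each kernel can be inspected in the frequency domain and each spatial map plotted on the scalp, both parameter groups remain clinically readable, which is the core of the white-box argument.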
A. Zanola
Department of Neuroscience, University of Padua, Padua, 35128, Italy; Padua Neuroscience Center, Padua, 35128, Italy; Department of Information Engineering, University of Padua, Padua, 35128, Italy
L. F. Tshimanga
Department of Neuroscience, University of Padua, Padua, 35128, Italy; Padua Neuroscience Center, Padua, 35128, Italy; Department of Information Engineering, University of Padua, Padua, 35128, Italy
Federico Del Pup
Department of Information Engineering, University of Padova
biomedical data analysis, machine learning
Marco Baiesi
Department of Physics and Astronomy, University of Padua, Padua, 35128, Italy; INFN, Section of Padua, Padua, 35128, Italy
M. Atzori
Department of Neuroscience, University of Padua, Padua, 35128, Italy; Padua Neuroscience Center, Padua, 35128, Italy; Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), 3960 Sierre