ADformer: A Multi-Granularity Transformer for EEG-Based Alzheimer's Disease Assessment

📅 2024-08-17
🏛️ arXiv.org
📈 Citations: 3
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address information loss and poor generalizability in EEG-based Alzheimer's disease (AD) assessment caused by manual feature engineering, this paper proposes ADformer, a multi-granularity transformer designed for EEG signals. Methodologically, it introduces a multi-granularity data embedding scheme across the temporal and spatial dimensions and a two-stage self-attention mechanism that learns local features within each granularity and global features among granularities, enabling end-to-end subject-independent training. A key contribution is the systematic evaluation of cross-subject robustness across five EEG datasets comprising 525 subjects. Without subject-specific calibration, the model achieves F1 scores of 75.19% (on a 65-subject cohort) and 93.58% (on a 126-subject cohort) in distinguishing AD from healthy controls, outperforming both CNN-based and handcrafted-feature approaches. This suggests a scalable, non-invasive, and cost-effective paradigm for AD screening with strong generalizability.

πŸ“ Abstract
Electroencephalogram (EEG) has emerged as a cost-effective and efficient method for supporting neurologists in assessing Alzheimer's disease (AD). Existing approaches predominantly utilize handcrafted features or Convolutional Neural Network (CNN)-based methods. However, the potential of the transformer architecture, which has shown promising results in various time series analysis tasks, remains underexplored in interpreting EEG for AD assessment. Furthermore, most studies are evaluated on the subject-dependent setup but often overlook the significance of the subject-independent setup. To address these gaps, we present ADformer, a novel multi-granularity transformer designed to capture temporal and spatial features to learn effective EEG representations. We employ multi-granularity data embedding across both dimensions and utilize self-attention to learn local features within each granularity and global features among different granularities. We conduct experiments across 5 datasets with a total of 525 subjects in setups including subject-dependent, subject-independent, and leave-subjects-out. Our results show that ADformer outperforms existing methods in most evaluations, achieving F1 scores of 75.19% and 93.58% on two large datasets with 65 subjects and 126 subjects, respectively, in distinguishing AD and healthy control (HC) subjects under the challenging subject-independent setup.
Problem

Research questions and friction points this paper is trying to address.

Improves EEG-based Alzheimer's disease detection with raw-signal processing
Addresses information loss in large-scale EEG data analysis
Enhances model robustness across diverse demographic datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-granularity spatial-temporal transformer for EEG
Two-stage intra-inter granularity self-attention mechanism
End-to-end representation learning from raw EEG
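The two-stage design above can be sketched in a toy form: segment a raw EEG recording into patch tokens at several temporal granularities, apply self-attention within each granularity (intra), then attend across per-granularity summaries (inter). This is a minimal NumPy illustration of the general idea, not the paper's actual implementation; the patch lengths, random projections, and mean-pooled summaries are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Scaled dot-product self-attention (single head, identity projections)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores) @ x

def multi_granularity_tokens(eeg, patch_lens, d_model, rng):
    """Segment a (channels, time) EEG array into patch tokens at several
    temporal granularities; each patch is linearly projected to d_model dims.
    Random projections stand in for learned embedding weights."""
    c, t = eeg.shape
    token_sets = []
    for p in patch_lens:
        n = t // p
        patches = eeg[:, : n * p].reshape(c, n, p).transpose(1, 0, 2).reshape(n, c * p)
        w = rng.standard_normal((c * p, d_model)) / np.sqrt(c * p)
        token_sets.append(patches @ w)
    return token_sets

rng = np.random.default_rng(0)
eeg = rng.standard_normal((19, 256))  # 19 channels, 256 time samples

# Stage 1 (intra-granularity): attention among tokens sharing a patch length.
tokens = multi_granularity_tokens(eeg, patch_lens=[8, 16, 32], d_model=64, rng=rng)
local = [self_attention(tk) for tk in tokens]

# Stage 2 (inter-granularity): attention across per-granularity summaries
# (here mean-pooled tokens), mixing information between granularities.
summaries = np.stack([tk.mean(axis=0) for tk in local])  # (3, 64)
fused = self_attention(summaries)                        # (3, 64)
```

The fused representation would then feed a classification head for AD vs. HC; the real model learns the embedding and attention weights end-to-end from raw EEG rather than using fixed random projections.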
Yihe Wang
University of North Carolina at Charlotte
Deep Learning, EEG, BCI, Medical Time-series, Foundation Model
N. Mammone
DICEAM Department, University Mediterranea of Reggio Calabria Via Zehender, Feo di Vito, 89122 Reggio Calabria, Italy
Darina Petrovsky
School of Nursing, Duke University, Durham, North Carolina 27708, United States
A. Tzallas
Department of Informatics & Telecommunications, University of Ioannina, Kostakioi, GR-47100, Arta, Greece
F. Morabito
DICEAM Department, University Mediterranea of Reggio Calabria Via Zehender, Feo di Vito, 89122 Reggio Calabria, Italy
Xiang Zhang
Department of Computer Science, University of North Carolina - Charlotte, Charlotte, North Carolina 28223, United States