🤖 AI Summary
To address the information loss and poor generalizability caused by manual feature engineering in EEG-based Alzheimer's disease (AD) diagnosis, this paper proposes the first multi-granularity Transformer model specifically designed for EEG signals. Methodologically, it introduces a novel multi-granularity data embedding scheme and a cross-granularity self-attention mechanism to jointly model local temporal dynamics and global spatial topology, enabling end-to-end subject-independent training. A key contribution is the first systematic evaluation of cross-subject robustness across five large-scale EEG datasets comprising 525 participants. Without subject-specific calibration, the model achieves F1-scores of 75.19% (on a cohort of 65 subjects) and 93.58% (on a cohort of 126 subjects), outperforming both CNN-based and handcrafted-feature approaches in most evaluations. This establishes a scalable, non-invasive, and cost-effective paradigm for AD screening with strong generalizability.
📄 Abstract
Electroencephalography (EEG) has emerged as a cost-effective and efficient method for supporting neurologists in assessing Alzheimer's disease (AD). Existing approaches predominantly rely on handcrafted features or Convolutional Neural Network (CNN)-based methods. However, the potential of the Transformer architecture, which has shown promising results in various time-series analysis tasks, remains underexplored for interpreting EEG in AD assessment. Furthermore, most studies evaluate only the subject-dependent setup and overlook the more clinically relevant subject-independent setup. To address these gaps, we present ADformer, a novel multi-granularity Transformer designed to capture temporal and spatial features and learn effective EEG representations. We employ multi-granularity data embedding across both dimensions and use self-attention to learn local features within each granularity and global features across granularities. We conduct experiments on five datasets with a total of 525 subjects under subject-dependent, subject-independent, and leave-subjects-out setups. Our results show that ADformer outperforms existing methods in most evaluations, achieving F1-scores of 75.19% and 93.58% on two large datasets with 65 and 126 subjects, respectively, in distinguishing AD from healthy control (HC) subjects under the challenging subject-independent setup.
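The core idea of multi-granularity embedding followed by shared self-attention can be sketched as follows. This is a minimal NumPy illustration under assumed settings (patch lengths 8/16/32, embedding dimension 16, random projections), not the authors' actual ADformer implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_granularity(x, patch_len, d_model):
    """Split a (channels, time) EEG segment into non-overlapping patches
    of length patch_len and linearly project each patch to d_model dims.
    Each patch becomes one token; smaller patches give finer granularity."""
    c, t = x.shape
    n = t // patch_len
    patches = x[:, : n * patch_len].reshape(c, n, patch_len)
    tokens = patches.reshape(c * n, patch_len)              # one token per patch
    w = rng.standard_normal((patch_len, d_model)) / np.sqrt(patch_len)
    return tokens @ w                                       # (c*n, d_model)

def self_attention(tokens):
    """Plain scaled dot-product self-attention over the full token sequence,
    so tokens from different granularities can attend to one another."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)             # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ tokens

# Toy EEG segment: 4 channels, 128 time points (illustrative sizes).
eeg = rng.standard_normal((4, 128))

# Embed at several temporal granularities, then concatenate all tokens
# into one sequence: 64 + 32 + 16 = 112 tokens of dimension 16.
d_model = 16
tokens = np.concatenate(
    [embed_granularity(eeg, p, d_model) for p in (8, 16, 32)], axis=0
)

# One attention pass mixes local (within-granularity) and global
# (cross-granularity) information.
out = self_attention(tokens)
print(tokens.shape, out.shape)  # (112, 16) (112, 16)
```

In the paper's framing, the same embedding idea is applied along both the temporal and spatial (channel) dimensions; this sketch shows only the temporal side for brevity.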