🤖 AI Summary
To address the inherent spatiotemporal heterogeneity in EEG signal decoding, this paper proposes EEG-CSANet—a Centralized Sparse Attention Network. The model employs a dual-branch architecture: a primary branch captures global spatiotemporal patterns across multiple temporal scales via multi-scale self-attention, while an auxiliary branch enhances localized interactions among salient electrodes through sparse cross-attention. Complemented by multi-branch temporal decomposition, the framework enables collaborative disentanglement and fusion of spatiotemporal features. Evaluated on five benchmark datasets—BCIC-IV-2A, BCIC-IV-2B, HGD, SEED, and SEED-VIG—EEG-CSANet achieves state-of-the-art classification accuracies ranging from 88.54% to 99.43%. It demonstrates superior generalizability and robustness against inter-subject and inter-session variability, establishing a novel paradigm for adaptive, high-performance brain–computer interfaces.
📝 Abstract
Electroencephalography (EEG) signal decoding is a key technology that translates brain activity into executable commands, laying the foundation for direct brain–machine interfacing and intelligent interaction. To address the inherent spatiotemporal heterogeneity of EEG signals, this paper proposes a multi-branch parallel architecture in which each temporal scale is equipped with an independent spatial feature extraction module. To further enhance multi-branch feature fusion, we propose EEG-CSANet, a centralized sparse-attention network for the fusion of multiscale features. It employs a main–auxiliary branch architecture: the main branch models core spatiotemporal patterns via multiscale self-attention, while the auxiliary branch enables efficient local interactions through sparse cross-attention. Experimental results show that EEG-CSANet achieves state-of-the-art (SOTA) performance across five public datasets (BCIC-IV-2A, BCIC-IV-2B, HGD, SEED, and SEED-VIG), with accuracies of 88.54%, 91.09%, 99.43%, 96.03%, and 90.56%, respectively, demonstrating strong adaptability and robustness across diverse EEG decoding tasks. Extensive ablation studies are also conducted to enhance the interpretability of EEG-CSANet. We hope EEG-CSANet can serve as a strong baseline for EEG signal decoding. The source code is publicly available at: https://github.com/Xiangrui-Cai/EEG-CSANet
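The main–auxiliary design described in the abstract can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' implementation: the pooling-based multiscale decomposition, the projection width of 16, the random projection matrices, and the top-k sparsity rule are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (tokens, dim); single-head scaled dot-product self-attention
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ x

def sparse_cross_attention(q, kv, k=4):
    # Each query attends only to its k highest-scoring keys
    # (a simple top-k sparsity rule; ties may keep slightly more).
    d = q.shape[-1]
    scores = q @ kv.T / np.sqrt(d)
    thresh = np.sort(scores, axis=-1)[:, -k][:, None]
    masked = np.where(scores >= thresh, scores, -np.inf)
    return softmax(masked, axis=-1) @ kv

def multiscale_decompose(eeg, scales=(1, 2, 4)):
    # eeg: (channels, time); average-pool along time at several scales
    branches = []
    for s in scales:
        t = (eeg.shape[1] // s) * s
        branches.append(eeg[:, :t].reshape(eeg.shape[0], -1, s).mean(axis=2))
    return branches

rng = np.random.default_rng(0)
eeg = rng.standard_normal((22, 64))      # e.g. 22 electrodes, 64 time samples
branches = multiscale_decompose(eeg)
# Main branch: project each temporal scale to a common width, then self-attend
main = [self_attention(b @ rng.standard_normal((b.shape[1], 16)))
        for b in branches]               # each: (22, 16)
# Auxiliary branch: sparse cross-attention between two scales' electrode tokens
aux = sparse_cross_attention(main[0], main[1], k=4)
# Centralized fusion: pool every branch and concatenate into one feature vector
fused = np.concatenate([m.mean(axis=0) for m in main] + [aux.mean(axis=0)])
print(fused.shape)  # → (64,)
```

The sketch shows the division of labor: the main branch produces one globally attended representation per temporal scale, while the auxiliary branch restricts each electrode token to a handful of salient partners before everything is fused into a single vector for classification.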