FEMBA: Efficient and Scalable EEG Analysis with a Bidirectional Mamba Foundation Model

📅 2025-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the quadratic time and memory complexity of Transformers in long-duration electroencephalogram (EEG) analysis—a bottleneck for deployment in resource-constrained settings such as wearable devices and real-time clinical diagnostics—this work introduces the first foundational EEG model based on bidirectional state-space modeling. Built on the Mamba architecture, the model achieves linear time and memory complexity in sequence length. The authors propose a self-supervised pretraining framework tailored to large-scale unlabeled EEG data (over 21,000 hours) and integrate lightweight compression techniques for efficient inference. Evaluated on the TUAB and TUAR benchmarks, the model attains a balanced accuracy of 81.82% (AUROC 0.8921) and an AUROC of 0.949, respectively, with only 7.8 million parameters—enabling efficient edge deployment.

📝 Abstract
Accurate and efficient electroencephalography (EEG) analysis is essential for detecting seizures and artifacts in long-term monitoring, with applications spanning hospital diagnostics to wearable health devices. Robust EEG analytics have the potential to greatly improve patient care. However, traditional deep learning models, especially Transformer-based architectures, are hindered by their quadratic time and memory complexity, making them less suitable for resource-constrained environments. To address these challenges, we present FEMBA (Foundational EEG Mamba + Bidirectional Architecture), a novel self-supervised framework that establishes new efficiency benchmarks for EEG analysis through bidirectional state-space modeling. Unlike Transformer-based models, which incur quadratic time and memory complexity, FEMBA scales linearly with sequence length, enabling more scalable and efficient processing of extended EEG recordings. Trained on over 21,000 hours of unlabeled EEG and fine-tuned on three downstream tasks, FEMBA achieves performance competitive with Transformer-based models at significantly lower computational cost. Specifically, it reaches 81.82% balanced accuracy (0.8921 AUROC) on TUAB and 0.949 AUROC on TUAR, while a tiny 7.8M-parameter variant demonstrates viability for resource-constrained devices. These results pave the way for scalable, general-purpose EEG analytics in both clinical and consumer settings, highlighting FEMBA as a promising candidate for wearable applications.
Problem

Research questions and friction points this paper is trying to address.

EEG analysis efficiency
Scalable EEG processing
Resource-constrained environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional state-space modeling
Linear scalability with sequence length
Self-supervised framework for EEG analysis
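The core efficiency claim above—that a bidirectional state-space scan processes a sequence in linear time, unlike self-attention's quadratic pairwise interactions—can be illustrated with a minimal sketch. This is not the FEMBA implementation (which uses the full selective Mamba recurrence with learned, input-dependent parameters); it is a toy scalar SSM with fixed coefficients, and all names here are illustrative:

```python
import numpy as np

def ssm_scan(x, a, b, c):
    """Linear-time state-space scan over a 1-D sequence.

    Recurrence: h[t] = a * h[t-1] + b * x[t],  y[t] = c * h[t].
    Each step costs O(1), so the whole scan is O(T) in time and
    memory -- the property that lets Mamba-style models handle long
    EEG recordings that quadratic attention cannot.
    """
    h = 0.0
    y = np.empty_like(x, dtype=float)
    for t in range(len(x)):
        h = a * h + b * x[t]
        y[t] = c * h
    return y

def bidirectional_ssm(x, a=0.9, b=1.0, c=1.0):
    """Bidirectional variant: scan forward and backward, then sum,
    so every output position sees both past and future context."""
    fwd = ssm_scan(x, a, b, c)
    bwd = ssm_scan(x[::-1], a, b, c)[::-1]
    return fwd + bwd
```

The backward pass doubles the constant factor but leaves the overall complexity at O(T), which is why bidirectionality comes cheaply here compared with widening an attention window.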