BrainPro: Towards Large-scale Brain State-aware EEG Representation Learning

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing EEG foundation models struggle to effectively model spatial interactions among channels or brain regions and are hindered by inconsistent electrode layouts across datasets, often compromising either flexibility or spatial modeling fidelity. Moreover, prevailing self-supervised pretraining approaches lack explicit disentanglement of diverse brain states (e.g., emotion, motor activity), resulting in suboptimal representation generalizability. To address these limitations, we propose a brain-state-aware EEG representation learning framework comprising: (1) a retrieval-based spatial learning module that adaptively models channel- and region-level associations under arbitrary electrode configurations; and (2) a brain-state disentanglement module integrating parallel encoders, state-disentanglement loss, and region-aware reconstruction loss to enable state-aware self-supervised representation learning. Evaluated on nine public BCI datasets, our framework achieves state-of-the-art performance and demonstrates significantly improved generalization and robustness across cross-device and cross-state tasks.

📝 Abstract
Electroencephalography (EEG) is a non-invasive technique for recording the brain's electrical activity, widely used in brain-computer interfaces (BCIs) and healthcare. Recent EEG foundation models trained on large-scale datasets have shown improved performance and generalizability over traditional decoding methods, yet significant challenges remain. Existing models often fail to explicitly capture channel-to-channel and region-to-region interactions, which are critical sources of information inherently encoded in EEG signals. Because channel configurations vary across datasets, these models either approximate spatial structure with self-attention or restrict training to a limited set of common channels, sacrificing flexibility and effectiveness. Moreover, although EEG datasets reflect diverse brain states such as emotion and motor activity, current models rarely learn state-aware representations during self-supervised pre-training. To address these gaps, we propose BrainPro, a large EEG model that introduces a retrieval-based spatial learning block to flexibly capture channel- and region-level interactions across varying electrode layouts, and a brain state-decoupling block that enables state-aware representation learning through parallel encoders with decoupling and region-aware reconstruction losses. This design allows BrainPro to adapt seamlessly to diverse tasks and hardware settings. Pre-trained on an extensive EEG corpus, BrainPro achieves state-of-the-art performance and robust generalization across nine public BCI datasets. Our code and pre-trained weights will be released.
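The retrieval idea in the abstract — letting each electrode gather features from spatially related channels regardless of the montage — can be illustrated with a toy sketch. Everything below (the function name, the nearest-neighbour retrieval, the distance-based softmax weighting, and the shapes) is an illustrative assumption, not BrainPro's published implementation:

```python
import numpy as np

def retrieval_spatial_mix(feats, coords, k=3):
    """Toy retrieval-based spatial mixing (illustrative, not the paper's block):
    for each channel, retrieve its k nearest neighbours by electrode coordinate
    and aggregate their features with distance-softmax weights."""
    n = feats.shape[0]
    out = np.empty_like(feats)
    for i in range(n):
        d = np.linalg.norm(coords - coords[i], axis=1)  # distance to every channel
        idx = np.argsort(d)[:k + 1]                     # self plus k nearest channels
        w = np.exp(-d[idx])
        w /= w.sum()                                    # softmax over negative distance
        out[i] = w @ feats[idx]                         # weighted feature aggregation
    return out

# Toy 4-channel montage: 2-D electrode positions and 3-dim per-channel features.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
feats = np.eye(4, 3)
mixed = retrieval_spatial_mix(feats, coords, k=2)
print(mixed.shape)  # → (4, 3)
```

Because the retrieval is driven by coordinates rather than a fixed channel index, the same function applies to any electrode layout, which is the flexibility the abstract claims for the real block.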
Problem

Research questions and friction points this paper is trying to address.

Capturing channel- and region-level interactions in EEG signals
Learning brain state-aware representations during self-supervised pre-training
Adapting flexibly to varying electrode layouts and tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Retrieval-based spatial learning for EEG channel interactions
Brain state-decoupling with parallel encoder architecture
Region-aware reconstruction losses for state-aware representations
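The second and third innovations can be sketched together in a minimal toy form: two parallel "encoders" map the same EEG segment to separate state embeddings, a decoupling term penalizes similarity between them, and a reconstruction term keeps them informative. All weights, shapes, and the specific loss forms (squared cosine similarity, plain MSE instead of the paper's region-aware variant) are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(16)          # one flattened EEG segment (toy size)
W_a = rng.standard_normal((8, 16))   # parallel encoder for state A (e.g. emotion)
W_b = rng.standard_normal((8, 16))   # parallel encoder for state B (e.g. motor)
D = rng.standard_normal((16, 16))    # shared decoder over the concatenated states

z_a, z_b = W_a @ x, W_b @ x          # two state-specific embeddings of the same input

# Decoupling loss: squared cosine similarity pushes the embeddings apart.
cos = z_a @ z_b / (np.linalg.norm(z_a) * np.linalg.norm(z_b))
loss_decouple = cos ** 2

# Reconstruction loss, simplified here to plain MSE on the segment.
x_hat = D @ np.concatenate([z_a, z_b])
loss_recon = np.mean((x_hat - x) ** 2)

print(z_a.shape, z_b.shape)  # → (8,) (8,)
```

In training, the two losses would be summed and minimized jointly, so each encoder specializes in one brain state while their concatenation still reconstructs the input.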