EEG-X: Device-Agnostic and Noise-Robust Foundation Model for EEG

📅 2025-11-12
🤖 AI Summary
To address cross-device data heterogeneity and the low signal-to-noise ratio (SNR) of EEG, this paper proposes a generalizable representation learning framework. Methodologically, it introduces location-based channel embeddings and a dictionary-inspired convolutional transformation (DiCT) layer to enable adaptive modeling of arbitrary electrode configurations; it further integrates noise-aware masking, joint denoising reconstruction in both raw and latent spaces, and DiCT-based feature projection to jointly optimize robust representation learning and signal recovery. Extensive evaluations on multi-device, multi-task, and cross-domain benchmarks demonstrate significant improvements over state-of-the-art methods, particularly under unseen device conditions and high-noise regimes, highlighting strong generalization and transferability. The code and pretrained models are publicly released to ensure reproducibility and facilitate real-world deployment.

📝 Abstract
Foundation models for EEG analysis are still in their infancy, limited by two key challenges: (1) variability across datasets caused by differences in recording devices and configurations, and (2) the low signal-to-noise ratio (SNR) of EEG, where brain signals are often buried under artifacts and non-brain sources. To address these challenges, we present EEG-X, a device-agnostic and noise-robust foundation model for EEG representation learning. EEG-X introduces a novel location-based channel embedding that encodes spatial information and improves generalization across domains and tasks by allowing the model to handle varying channel numbers, combinations, and recording lengths. To enhance robustness against noise, EEG-X employs a noise-aware masking and reconstruction strategy in both raw and latent spaces. Unlike previous models that mask and reconstruct raw noisy EEG signals, EEG-X is trained to reconstruct denoised signals obtained through an artifact removal process, ensuring that the learned representations focus on neural activity rather than noise. To further enhance reconstruction-based pretraining, EEG-X introduces a dictionary-inspired convolutional transformation (DiCT) layer that projects signals into a structured feature space before computing reconstruction (MSE) loss, reducing noise sensitivity and capturing frequency- and shape-aware similarities. Experiments on datasets collected from diverse devices show that EEG-X outperforms state-of-the-art methods across multiple downstream EEG tasks and excels in cross-domain settings where pre-trained and downstream datasets differ in electrode layouts. The models and code are available at: https://github.com/Emotiv/EEG-X
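The abstract's location-based channel embedding can be illustrated with a minimal sketch. The function below is a hypothetical stand-in (the paper's actual embedding is learned): it encodes each electrode's 3D coordinates with sinusoidal features, so channels are identified by spatial position rather than index and montages with different channel counts map into the same embedding space.

```python
import numpy as np

def location_embedding(coords, dim=24):
    """Sinusoidal embedding of 3D electrode coordinates (hypothetical sketch,
    not the paper's implementation).

    coords : (n_channels, 3) array of normalized x, y, z positions.
    Returns (n_channels, dim) embeddings. Because the embedding depends only
    on position, any electrode layout can be handled without retraining.
    """
    n_ch, n_axes = coords.shape
    per_axis = dim // (2 * n_axes)           # sin/cos pairs per spatial axis
    freqs = 2.0 ** np.arange(per_axis)       # geometric frequency ladder
    angles = coords[:, :, None] * freqs      # (n_ch, 3, per_axis)
    emb = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return emb.reshape(n_ch, -1)

# Two different montages embed into the same feature space:
emb_14 = location_embedding(np.random.rand(14, 3))   # e.g. a 14-channel headset
emb_64 = location_embedding(np.random.rand(64, 3))   # e.g. a 64-channel cap
```

This is what lets pre-trained and downstream datasets differ in electrode layouts, as the cross-domain experiments require.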
Problem

Research questions and friction points this paper is trying to address.

Addressing EEG dataset variability from different recording devices
Improving robustness to EEG's low signal-to-noise ratio
Enhancing cross-domain generalization for EEG representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Location-based channel embedding for cross-device generalization
Noise-aware masking reconstructs denoised neural signals
Dictionary-inspired convolutional transformation reduces noise sensitivity
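The DiCT idea above can be sketched as follows: project both the reconstruction and the (denoised) target through a bank of convolutional atoms before taking the MSE, so the loss compares frequency- and shape-aware features rather than raw samples. The fixed cosine-atom filter bank here is a simplifying assumption; the paper's DiCT layer and filter design may differ.

```python
import numpy as np

def dict_filters(n_atoms=8, width=32):
    """Fixed cosine-atom filter bank (a stand-in for the DiCT filters)."""
    t = np.arange(width)
    return np.stack([np.cos(2 * np.pi * (k + 1) * t / width)
                     for k in range(n_atoms)])

def dict_mse(pred, target, filters):
    """MSE computed in the projected feature space, not on raw samples."""
    def project(x):
        # valid-mode convolution of the 1-D signal with each atom
        return np.stack([np.convolve(x, f, mode="valid") for f in filters])
    p, q = project(pred), project(target)
    return float(np.mean((p - q) ** 2))

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 8 * np.pi, 256))       # denoised target
noisy = clean + 0.5 * rng.standard_normal(256)        # model reconstruction
# Per the abstract, the loss targets the denoised signal, so the learned
# representations focus on neural activity rather than noise.
loss = dict_mse(noisy, clean, dict_filters())
```

Projecting through band-selective atoms attenuates broadband noise that raw-sample MSE would penalize directly, which is the stated motivation for reduced noise sensitivity.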
Navid Mohammadi Foumani
Monash University
Deep Learning, Foundation Model, GenAI
Soheila Ghane
Emotiv Research, Melbourne, Australia
Nam Nguyen
Emotiv Research, Sydney, Australia
Mahsa Salehi
Senior Lecturer, Monash University
Anomaly Detection, Time Series Analysis, Machine Learning, Brain EEG Analysis
Geoffrey I. Webb
Monash University, Melbourne, Australia
Geoffrey Mackellar
Emotiv Research, Sydney, Australia