🤖 AI Summary
Existing EEG pre-trained models inadequately model neural oscillatory features, limiting generalization and performance across BCI tasks. To address this, we propose LaBraM++, the first foundational EEG model integrating signal-processing priors with codebook-enhanced representation learning. LaBraM++ introduces band-guided learnable vector quantization, time-frequency domain self-supervised pre-training, and a lightweight adaptation head—collectively overcoming representational capacity bottlenecks and significantly enhancing oscillatory information capture. Evaluated across diverse BCI paradigms—including motor imagery, SSVEP, and ERP classification—LaBraM++ consistently outperforms state-of-the-art baselines, achieving an average accuracy improvement of 4.2% and a 32% reduction in training time. It establishes new open-source SOTA performance among EEG foundation models.
📝 Abstract
Recent advances in large-scale pre-trained Electroencephalogram (EEG) models have shown great promise, driving progress in Brain-Computer Interfaces (BCIs) and healthcare applications. However, despite their success, many existing pre-trained models have struggled to fully capture the rich information content of neural oscillations, a limitation that fundamentally constrains their performance and generalizability across diverse BCI tasks. This limitation is frequently rooted in suboptimal architectural design choices which constrain their representational capacity. In this work, we introduce LaBraM++, an enhanced Large Brainwave Foundation Model (LBM) that incorporates principled improvements grounded in robust signal processing foundations. LaBraM++ demonstrates substantial gains across a variety of tasks, consistently outperforming its originally-based architecture and achieving competitive results when compared to other open-source LBMs. Its superior performance and training efficiency highlight its potential as a strong foundation for future advancements in LBMs.