🤖 AI Summary
This work addresses the limitations of existing signal-reconstruction-based self-supervised methods for building EEG foundation models: because they reconstruct raw signals, they are highly susceptible to high-variance artifacts, struggle to learn task-relevant neural representations, and therefore transfer poorly. To overcome this, we propose Laya, the first EEG foundation model built on LeJEPA, a recently proposed, stabilized formulation of the Joint Embedding Predictive Architecture (JEPA). Instead of reconstructing raw signals, Laya predicts stable representations in latent space, substantially improving the semantic relevance of the learned features and their transfer performance. Evaluated by self-supervised pretraining followed by linear probing, Laya achieves significant improvements over reconstruction-based baselines across multiple EEG benchmarks, validating the efficacy of latent predictive modeling for EEG representation learning.
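The core idea, predicting latent targets rather than reconstructing raw EEG, can be illustrated with a minimal PyTorch sketch. Everything below (the `EEGEncoder` architecture, tensor shapes, channel counts, and the MSE prediction loss) is a hypothetical illustration of a generic JEPA-style objective under our own assumptions, not the Laya implementation:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class EEGEncoder(nn.Module):
    """Hypothetical encoder: maps an EEG window (batch, channels, time)
    to a d-dimensional embedding."""
    def __init__(self, n_channels: int = 19, d_model: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, stride=2, padding=3),
            nn.GELU(),
            nn.Conv1d(64, 128, kernel_size=5, stride=2, padding=2),
            nn.GELU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
        )
        self.proj = nn.Linear(128, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.proj(self.conv(x).squeeze(-1))  # (B, d_model)

def jepa_step(context_enc, target_enc, predictor, x_context, x_target):
    """One JEPA-style step: predict the target view's embedding from the
    context view. The target encoder receives no gradient, so the model
    learns by latent prediction, not signal reconstruction."""
    z_ctx = context_enc(x_context)        # (B, d)
    with torch.no_grad():
        z_tgt = target_enc(x_target)      # (B, d), stop-gradient target
    return F.mse_loss(predictor(z_ctx), z_tgt)

# Typical setup: the target encoder is a frozen (often EMA-updated)
# copy of the context encoder.
encoder = EEGEncoder()
target_encoder = copy.deepcopy(encoder).requires_grad_(False)
predictor = nn.Sequential(nn.Linear(256, 256), nn.GELU(), nn.Linear(256, 256))
```

Because the target lives in embedding space, high-variance artifacts in the raw waveform contribute to the loss only insofar as they survive the target encoder, which is the intuition behind the improved transferability claimed above.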
📝 Abstract
Electroencephalography (EEG) is a widely used tool for studying brain function, with applications in clinical neuroscience, diagnosis, and brain-computer interfaces (BCIs). Recent EEG foundation models trained on large unlabeled corpora aim to learn transferable representations, but their effectiveness remains unclear; reported improvements over smaller task-specific models are often modest, sensitive to downstream adaptation and fine-tuning strategies, and limited under linear probing. We hypothesize that one contributing factor is the reliance on signal reconstruction as the primary self-supervised learning (SSL) objective, which biases representations toward high-variance artifacts rather than task-relevant neural structure. To address this limitation, we explore an SSL paradigm based on Joint Embedding Predictive Architectures (JEPA), which learn by predicting latent representations instead of reconstructing raw signals. While earlier JEPA-style methods often rely on additional heuristics to ensure training stability, recent advances such as LeJEPA provide a more principled and stable formulation. We introduce Laya, the first EEG foundation model based on LeJEPA. Across a range of EEG benchmarks, Laya demonstrates improved performance under linear probing compared to reconstruction-based baselines, suggesting that latent predictive objectives offer a promising direction for learning transferable, high-level EEG representations.
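To our understanding, LeJEPA's more principled formulation replaces the usual anti-collapse heuristics (EMA schedules, asymmetric predictors, and the like) with an explicit regularizer that pushes the embedding distribution toward an isotropic Gaussian, tested along random one-dimensional projections. The sketch below is a loose approximation under that assumption: it matches the empirical characteristic function of each projection to that of N(0, 1), standing in for the statistical test used in the actual method; the function name, grid, and weighting are illustrative, not LeJEPA's API.

```python
import torch
import torch.nn.functional as F

def isotropic_gaussian_penalty(z: torch.Tensor, n_proj: int = 64) -> torch.Tensor:
    """Hypothetical isotropy regularizer in the spirit of LeJEPA
    (assumption: the real objective differs in its test statistic and
    details). Embeddings are projected onto random unit directions, and
    each 1D projection's empirical characteristic function is matched to
    that of a standard Gaussian."""
    _, d = z.shape
    u = F.normalize(torch.randn(d, n_proj, device=z.device), dim=0)
    p = z @ u                                    # (B, n_proj) 1D projections
    t = torch.linspace(-3.0, 3.0, 17, device=z.device)  # CF evaluation grid
    angles = p.unsqueeze(-1) * t                 # (B, n_proj, 17)
    ecf_re = torch.cos(angles).mean(dim=0)       # empirical CF, real part
    ecf_im = torch.sin(angles).mean(dim=0)       # empirical CF, imaginary part
    gauss_cf = torch.exp(-0.5 * t ** 2)          # CF of N(0, 1) is real-valued
    weight = torch.exp(-0.5 * t ** 2)            # downweight large |t|
    err = (ecf_re - gauss_cf) ** 2 + ecf_im ** 2
    return (err * weight).sum(dim=-1).mean()

# Combined objective (lam is a hypothetical trade-off weight):
# loss = jepa_step(...) + lam * isotropic_gaussian_penalty(z_ctx)
```

Under linear probing, the pretrained encoder is frozen and only a linear classifier is trained on its embeddings, so the downstream scores reported here reflect the quality of the learned representation rather than the capacity of a fine-tuning procedure.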