AI Summary
Current deep learning models for EEG are highly task-specific and rely on high-density, multi-channel data, limiting generalization to low-channel, missing-channel, or heterogeneous device settings. To address this, we propose the first hardware-agnostic, single-channel EEG self-supervised foundation model. Our method employs a hybrid encoder combining convolutional layers for local temporal feature extraction and hierarchical Transformers for modeling long-range temporal dependencies. The model is pretrained at scale on unlabeled EEG data to learn robust, transferable representations. When used as a fixed feature extractor with only a single-channel input, it achieves state-of-the-art performance across six motor imagery and cognitive tasks, outperforming leading multi-channel foundation models and handcrafted feature approaches. Crucially, it demonstrates superior cross-task and cross-device adaptability, while enhancing neurophysiological interpretability through principled representation learning.
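The hybrid encoder described above can be sketched roughly as follows. This is an illustrative assumption, not the paper's configuration: the layer widths, kernel sizes, and downsampling strides are invented, and a plain transformer encoder stands in for the hierarchical transformer the paper uses.

```python
import torch
import torch.nn as nn

class HybridEncoder(nn.Module):
    """Toy single-channel EEG encoder: a convolutional stem extracts local
    temporal features, then a transformer models longer-range dependencies.
    All hyperparameters here are hypothetical placeholders."""

    def __init__(self, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # Conv stem: downsample the raw 1-channel waveform into a sequence
        # of local feature vectors (tokens). Each conv halves-then-quarters
        # the temporal resolution (stride 4 twice -> 16x downsampling).
        self.stem = nn.Sequential(
            nn.Conv1d(1, d_model // 2, kernel_size=15, stride=4, padding=7),
            nn.GELU(),
            nn.Conv1d(d_model // 2, d_model, kernel_size=15, stride=4, padding=7),
            nn.GELU(),
        )
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, x):
        # x: (batch, samples) raw single-channel EEG
        tokens = self.stem(x.unsqueeze(1)).transpose(1, 2)  # (batch, seq, d_model)
        return self.transformer(tokens)  # contextualized token embeddings

enc = HybridEncoder()
eeg = torch.randn(2, 1024)   # two single-channel segments of 1024 samples
out = enc(eeg)
print(out.shape)             # torch.Size([2, 64, 128])
```

In a self-supervised setup, an objective such as masked reconstruction or contrastive prediction would be attached on top of these token embeddings during pretraining; only the frozen encoder is kept for downstream tasks.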
Abstract
Current deep learning models for electroencephalography (EEG) are often task-specific and depend on large labeled datasets, limiting their adaptability. Although emerging foundation models aim for broader applicability, their rigid dependence on fixed, high-density multi-channel montages restricts their use across heterogeneous datasets and in missing-channel or practical low-channel settings. To address these limitations, we introduce SingLEM, a self-supervised foundation model that learns robust, general-purpose representations from single-channel EEG, making it inherently hardware agnostic. The model employs a hybrid encoder architecture that combines convolutional layers to extract local features with a hierarchical transformer to model both short- and long-range temporal dependencies. SingLEM is pretrained on 71 public datasets comprising over 9,200 subjects and 357,000 single-channel hours of EEG. When evaluated as a fixed feature extractor across six motor imagery and cognitive tasks, aggregated single-channel representations consistently outperformed leading multi-channel foundation models and handcrafted baselines. These results demonstrate that a single-channel approach can achieve state-of-the-art generalization while enabling fine-grained neurophysiological analysis and enhancing interpretability. The source code and pretrained models are available at https://github.com/ttlabtuat/SingLEM.
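Because the encoder sees one channel at a time, downstream use on an arbitrary montage reduces to embedding each available channel independently and aggregating. The sketch below shows one plausible aggregation scheme (mean-pooling over time per channel, then averaging across channels); the `encode` stand-in and the pooling choices are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

def aggregate_channel_features(encode, eeg):
    """eeg: (n_channels, n_samples) array from any montage.
    `encode` maps one channel's waveform to a (seq_len, d) embedding.
    We mean-pool each channel's embedding over time, then average across
    whatever channels happen to be present, so the result is independent
    of the montage size -- the hardware-agnostic property."""
    per_channel = np.stack([encode(ch).mean(axis=0) for ch in eeg])
    return per_channel.mean(axis=0)  # (d,) fixed-size feature vector

# Stand-in for a frozen pretrained encoder: a toy random projection that
# turns 64-sample windows into 128-dim token embeddings.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
encode = lambda x: x.reshape(-1, 64) @ W  # (seq_len, 128) toy embedding

feats = aggregate_channel_features(encode, rng.standard_normal((8, 1024)))
print(feats.shape)  # (128,)
```

The same call works unchanged for an 8-channel cap or a single-electrode wearable, which is what lets one pretrained model serve heterogeneous devices; a lightweight classifier (e.g. logistic regression) is then trained on the fixed feature vectors.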