SingLEM: Single-Channel Large EEG Model

📅 2025-09-22
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Current deep learning models for EEG are highly task-specific and rely on high-density, multi-channel data, limiting generalization to low-channel, missing-channel, or heterogeneous device settings. To address this, we propose the first hardware-agnostic, single-channel EEG self-supervised foundation model. Our method employs a hybrid encoder combining convolutional layers for local temporal feature extraction and hierarchical Transformers for modeling long-range temporal dependencies. The model is pretrained at scale on unlabeled EEG data to learn robust, transferable representations. When used as a fixed feature extractor with only a single-channel input, it achieves state-of-the-art performance across six motor imagery and cognitive tasks, outperforming leading multi-channel foundation models and handcrafted feature approaches. Crucially, it demonstrates superior cross-task and cross-device adaptability, while enhancing neurophysiological interpretability through principled representation learning.

📝 Abstract
Current deep learning models for electroencephalography (EEG) are often task-specific and depend on large labeled datasets, limiting their adaptability. Although emerging foundation models aim for broader applicability, their rigid dependence on fixed, high-density multi-channel montages restricts their use across heterogeneous datasets and in missing-channel or practical low-channel settings. To address these limitations, we introduce SingLEM, a self-supervised foundation model that learns robust, general-purpose representations from single-channel EEG, making it inherently hardware agnostic. The model employs a hybrid encoder architecture that combines convolutional layers to extract local features with a hierarchical transformer to model both short- and long-range temporal dependencies. SingLEM is pretrained on 71 public datasets comprising over 9,200 subjects and 357,000 single-channel hours of EEG. When evaluated as a fixed feature extractor across six motor imagery and cognitive tasks, aggregated single-channel representations consistently outperformed leading multi-channel foundation models and handcrafted baselines. These results demonstrate that a single-channel approach can achieve state-of-the-art generalization while enabling fine-grained neurophysiological analysis and enhancing interpretability. The source code and pretrained models are available at https://github.com/ttlabtuat/SingLEM.
Problem

Research questions and friction points this paper is trying to address.

Overcoming task-specific limitations of EEG models requiring large labeled datasets
Addressing rigid dependence on fixed multi-channel EEG montages across heterogeneous datasets
Enabling robust EEG analysis in missing-channel or practical low-channel settings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-supervised foundation model for single-channel EEG
Hybrid encoder with convolutional and hierarchical transformer layers
Pretrained on 71 datasets for robust general-purpose representations
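The hybrid-encoder idea above (convolutional layers for local temporal features, a transformer stage for longer-range dependencies, then pooling into a fixed-length embedding) can be illustrated with a minimal NumPy sketch. The kernel count, window length, stride, and single attention layer here are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(x, kernels, stride=4):
    """Local feature extraction: slide each kernel over the 1-D signal."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)[::stride]  # (n_windows, k)
    return windows @ kernels.T  # (n_windows, n_kernels): one token per window

def self_attention(tokens):
    """One head of scaled dot-product attention over the token sequence,
    standing in for the transformer that models long-range dependencies."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ tokens

# Hypothetical sizes: 1 s of single-channel EEG at 256 Hz, 8 kernels of length 16.
x = rng.standard_normal(256)
kernels = rng.standard_normal((8, 16))
tokens = conv_features(x, kernels)               # local temporal features -> tokens
embedding = self_attention(tokens).mean(axis=0)  # pooled fixed-length representation
```

In the fixed-feature-extractor evaluation described in the abstract, a downstream classifier would be trained on such pooled per-channel embeddings while the pretrained encoder itself stays frozen.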
🔎 Similar Papers
No similar papers found.
Jamiyan Sukhbaatar
Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, Koganei-Shi 184-8588, Japan, and also with the Department of Electronics and Communication Engineering, National University of Mongolia, Ulaanbaatar 14200, Mongolia
Satoshi Imamura
Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, Koganei-Shi 184-8588, Japan
Ibuki Inoue
Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, Koganei-Shi 184-8588, Japan
Shoya Murakami
Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, Koganei-Shi 184-8588, Japan
Kazi Mahmudul Hassan
Assistant Professor, Dept. of Computer Science & Engineering, Jatiya Kabi Kazi Nazrul Islam
Brain Computer Interfacing, Biomedical Engineering, Machine Learning
Seungwoo Han
Tokyo University of Agriculture and Technology
neuromorphic computing
Ingon Chanpornpakdi
Department of Electronic and Information Engineering, Tokyo University of Agriculture and Technology, Koganei-Shi 184-8588, Japan
Toshihisa Tanaka
Tokyo University of Agriculture and Technology
Signal Processing, Biomedical Engineering, Biosignal Informatics