BrainOmni: A Brain Foundation Model for Unified EEG and MEG Signals

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Problem: EEG and MEG analysis is hampered by modality heterogeneity, diverse sensor configurations, and the poor generalisability of existing modality- and dataset-specific models. Method: We propose the first unified EEG-MEG foundation model. It introduces BrainTokenizer—a novel spatiotemporal tokenization scheme—and a sensor encoder that explicitly models spatial layout, orientation, and sensor type. A cross-modal self-supervised pretraining framework is designed to jointly learn representations from heterogeneous data. The model is trained on 1,997 hours of EEG and 656 hours of MEG recordings—the first large-scale MEG pretraining effort. Contributions/Results: (1) It achieves state-of-the-art performance across diverse downstream tasks; (2) it enables zero-shot transfer to unseen recording devices; and (3) joint multimodal pretraining significantly enhances representation consistency and task performance for both modalities. This work establishes a new paradigm for universal representation learning of neural electrophysiological signals.

📝 Abstract
Electroencephalography (EEG) and magnetoencephalography (MEG) measure neural activity non-invasively by capturing electromagnetic fields generated by dendritic currents. Although rooted in the same biophysics, EEG and MEG exhibit distinct signal patterns, further complicated by variations in sensor configurations across modalities and recording devices. Existing approaches typically rely on separate, modality- and dataset-specific models, which limits performance and cross-domain scalability. This paper proposes BrainOmni, the first brain foundation model that generalises across heterogeneous EEG and MEG recordings. To unify diverse data sources, we introduce BrainTokenizer, the first tokenizer that quantises spatiotemporal brain activity into discrete representations. Central to BrainTokenizer is a novel Sensor Encoder that encodes sensor properties such as spatial layout, orientation, and type, enabling compatibility across devices and modalities. Building upon the discrete representations, BrainOmni learns unified semantic embeddings of brain signals by self-supervised pretraining. To the best of our knowledge, it is the first foundation model to support both EEG and MEG signals, as well as the first to incorporate large-scale MEG pretraining. A total of 1,997 hours of EEG and 656 hours of MEG data are curated and standardised from publicly available sources for pretraining. Experiments show that BrainOmni outperforms both existing foundation models and state-of-the-art task-specific models on a range of downstream tasks. It also demonstrates strong generalisation to unseen EEG and MEG devices. Further analysis reveals that joint EEG-MEG (EMEG) training yields consistent improvements across both modalities. Code and model checkpoints will be released upon acceptance.
Problem

Research questions and friction points this paper is trying to address.

Unifying EEG and MEG signals into a single model
Overcoming modality-specific limitations in brain signal analysis
Enhancing cross-device and cross-modality compatibility in neural recordings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified EEG and MEG brain foundation model
Tokenizes brain activity into discrete representations
Self-supervised pretraining for semantic embeddings
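As a rough illustration of the tokenization idea summarised above, the sketch below embeds per-sensor metadata (position, orientation, type) and vector-quantises feature vectors into discrete token ids via nearest-codebook lookup. This is not the paper's actual architecture: the function names, feature shapes, codebook size, and the random stand-in for learned weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_sensor(pos, ori, sensor_type, dim=8):
    """Toy sensor encoder: project 3-D position, 3-D orientation, and a
    one-hot sensor type (EEG=0, MEG=1) into a fixed-size embedding.
    The projection matrix is random here, standing in for learned weights."""
    onehot = np.eye(2)[sensor_type]               # (2,) one-hot type code
    feats = np.concatenate([pos, ori, onehot])    # (8,) raw sensor features
    W = rng.standard_normal((dim, feats.size)) * 0.1
    return W @ feats                              # (dim,) sensor embedding

def quantise(features, codebook):
    """Vector quantisation: map each feature vector (row of `features`,
    shape (N, D)) to the index of its nearest codebook entry (K, D)."""
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    return d.argmin(axis=1)                       # (N,) discrete token ids

# Example: 4 time-window feature vectors from one sensor, 16-entry codebook
feats = rng.standard_normal((4, 8))
codebook = rng.standard_normal((16, 8))
tokens = quantise(feats, codebook)
print(tokens.shape, tokens.dtype)
```

Once signals from any device are reduced to such discrete tokens, a single sequence model can be pretrained on mixed EEG/MEG corpora, which is the premise behind the unified pretraining described above.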
Authors

Qinfan Xiao
Department of Electronic Engineering, Tsinghua University, China
Ziyun Cui
Tsinghua University
Chi Zhang
Shanghai Artificial Intelligence Laboratory, China
Siqi Chen
Department of Electronic Engineering, Tsinghua University, China
Wen Wu
Shanghai Artificial Intelligence Laboratory, China
Andrew Thwaites
Department of Psychology, University of Cambridge, UK; Speech Hearing and Phonetic Sciences, University College London, UK
Alexandra Woolgar
Professor of Integrative and Systems Neuroscience (human cognitive neuroscience, attention, cognitive control, neuroimaging)
Bowen Zhou
Shanghai Artificial Intelligence Laboratory, China
Chao Zhang
Shanghai Artificial Intelligence Laboratory, China; Department of Electronic Engineering, Tsinghua University, China; Speech Hearing and Phonetic Sciences, University College London, UK