🤖 AI Summary
This study addresses the lack of a unified evaluation framework and systematic understanding of foundation models for brain signals. To this end, we present Brain4FMs, the first benchmarking platform specifically designed for foundation models of neural signals such as EEG and intracranial EEG. The platform introduces a modular, open, and standardized architecture that enables fair cross-model and cross-task comparisons. It integrates 15 representative self-supervised foundation models and 18 public datasets spanning multiple neural signal modalities. We systematically evaluate how pretraining data, self-supervised learning strategies, and model architectures influence generalization performance. This work establishes the first taxonomy and unified evaluation protocol for brain signal foundation models, thereby advancing the development of more accurate and transferable neural signal modeling approaches.
📝 Abstract
Brain Foundation Models (BFMs) are transforming neuroscience by enabling scalable and transferable learning from neural signals, advancing both clinical diagnostics and cutting-edge neuroscience exploration. Their emergence is powered by large-scale clinical recordings, particularly electroencephalography (EEG) and intracranial EEG, which provide rich temporal and spatial representations of brain dynamics. However, despite their rapid proliferation, the field lacks a unified understanding of existing methodologies and a standardized evaluation framework. To fill this gap, we map the benchmark design space along two axes: (i) from the model perspective, we organize BFMs under a self-supervised learning (SSL) taxonomy; and (ii) from the dataset perspective, we summarize common downstream tasks and curate representative public datasets across clinical and human-centric neurotechnology applications. Building on this consolidation, we introduce Brain4FMs, an open evaluation platform with plug-and-play interfaces that integrates 15 representative BFMs and 18 public datasets. It enables standardized comparisons and analysis of how pretraining data, SSL strategies, and architectures affect generalization and downstream performance, guiding the development of more accurate and transferable BFMs. The code is available at https://anonymous.4open.science/r/Brain4FMs-85B8.