🤖 AI Summary
This paper systematically reviews recent advances in electroencephalography foundation models (EEG-FMs), emphasizing their intrinsic distinctions from general-purpose foundation models. Methodologically, it introduces the first structured review framework covering 12 mainstream architectures (e.g., Transformers, CNN-RNN hybrids), self-supervised pretraining paradigms (e.g., MAE, SimCLR), multimodal alignment techniques, and lightweight deployment strategies, and it catalogs more than 30 publicly available EEG datasets. It further proposes a three-dimensional evaluation framework and a cross-dataset transferability analysis protocol. Key contributions include: (i) identifying three fundamental bottlenecks, namely data bias, label scarcity, and limited clinical interpretability; and (ii) prospectively advocating neuro-symbolic integration and other explainable AI pathways to enhance transparency and clinical utility. The work establishes a unified technical roadmap and research agenda for intelligent EEG analysis.
📝 Abstract
Electroencephalogram (EEG) signals play a crucial role in understanding brain activity and diagnosing neurological disorders. This review focuses on the recent development of EEG foundation models (EEG-FMs), which have shown great potential in processing and analyzing EEG data. We discuss various EEG-FMs, covering their architectures, pre-training strategies, and the pre-training and downstream datasets they use. The review also highlights the challenges and future directions in this field, aiming to provide a comprehensive overview for researchers and practitioners interested in EEG analysis and related EEG-FMs.