One Model for All: Universal Pre-training for EEG based Emotion Recognition across Heterogeneous Datasets and Paradigms

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
EEG-based emotion recognition suffers from poor model generalizability and limited cross-paradigm transferability due to dataset heterogeneity, particularly variations in channel configurations and subject-specific characteristics. To address this, we propose the first unified pretraining framework for multi-source EEG data. Our approach first standardizes inputs via a Unified Channel Schema (UCS), then employs a two-stage decoupled learning strategy to model cross-dataset knowledge. We further integrate an Adaptive Resampling Transformer (ART) with a Graph Attention Network (GAT) to jointly capture spatiotemporal dynamics and enhance noise robustness. Additionally, self-supervised contrastive learning is incorporated to improve representation discriminability. Extensive experiments on SEED, DEAP, and DREAMER show state-of-the-art performance. Crucially, universal pre-training yields substantial gains over from-scratch training (+7.65% on DEAP, +3.55% on DREAMER), and cross-dataset transfer to unseen data surpasses the within-domain pre-training benchmark, demonstrating superior generalization across heterogeneous EEG sources.
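
To make the UCS step concrete, here is a minimal PyTorch sketch of projecting a dataset-specific montage onto a shared channel union. The toy schema contents, the function name `to_ucs`, and the zero-fill-plus-mask handling of missing electrodes are illustrative assumptions, not the paper's implementation.

```python
import torch

# Toy union schema; the paper's actual UCS covers the union of the SEED (62-ch),
# DEAP (32-ch), and DREAMER montages -- the channel names below are illustrative.
UCS = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2"]

def to_ucs(x: torch.Tensor, channels: list[str]) -> tuple[torch.Tensor, torch.Tensor]:
    """Project a (C, T) recording onto the unified schema.

    Channels absent from this dataset are zero-filled and flagged in a
    boolean mask so downstream modules can ignore them (an assumption).
    """
    out = torch.zeros(len(UCS), x.shape[-1])
    mask = torch.zeros(len(UCS), dtype=torch.bool)
    index = {name: i for i, name in enumerate(UCS)}
    for row, name in enumerate(channels):
        if name in index:
            out[index[name]] = x[row]
            mask[index[name]] = True
    return out, mask

# e.g., a 3-channel DREAMER-like snippet mapped into the 10-channel toy schema
sig, mask = to_ucs(torch.randn(3, 128), ["Fp1", "F3", "O2"])
```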

📝 Abstract
EEG-based emotion recognition is hampered by profound dataset heterogeneity (channel/subject variability), hindering generalizable models. Existing approaches struggle to transfer knowledge effectively. We propose 'One Model for All', a universal pre-training framework for EEG analysis across disparate datasets. Our paradigm decouples learning into two stages: (1) Univariate pre-training via self-supervised contrastive learning on individual channels, enabled by a Unified Channel Schema (UCS) that leverages the channel union (e.g., SEED-62ch, DEAP-32ch); (2) Multivariate fine-tuning with a novel 'ART' (Adaptive Resampling Transformer) and 'GAT' (Graph Attention Network) architecture to capture complex spatio-temporal dependencies. Experiments show universal pre-training is an essential stabilizer, preventing collapse on SEED (vs. scratch) and yielding substantial gains on DEAP (+7.65%) and DREAMER (+3.55%). Our framework achieves new SOTA performance on all within-subject benchmarks: SEED (99.27%), DEAP (93.69%), and DREAMER (93.93%). We also show SOTA cross-dataset transfer, achieving 94.08% (intersection) and 93.05% (UCS) on the unseen DREAMER dataset, with the former surpassing the within-domain pre-training benchmark. Ablation studies validate our architecture: the GAT module is critical, yielding a +22.19% gain over GCN on the high-noise DEAP dataset, and its removal causes a catastrophic -16.44% performance drop. This work paves the way for more universal, scalable, and effective pre-trained models for diverse EEG analysis tasks.
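
The abstract does not spell out the contrastive objective used in stage (1), so as one plausible reading, here is a minimal InfoNCE/NT-Xent sketch over two augmented views of single-channel segment embeddings; the function name `info_nce`, the temperature, and the dimensions are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """NT-Xent-style loss for two augmented views of the same segments.

    z1, z2: (B, D) embeddings; row i of z1 and row i of z2 are positives.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # (B, B) cosine similarities
    labels = torch.arange(z1.size(0))     # diagonal entries are the positives
    return F.cross_entropy(logits, labels)

# two views of a batch of univariate (single-channel) EEG segment embeddings
loss = info_nce(torch.randn(8, 64), torch.randn(8, 64))
```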
Problem

Research questions and friction points this paper is trying to address.

Addresses dataset heterogeneity in EEG emotion recognition across different studies
Overcomes knowledge transfer limitations between diverse EEG recording setups
Develops universal pre-training for generalizable cross-dataset EEG analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Universal pre-training framework for EEG across datasets
Univariate contrastive learning with Unified Channel Schema
Multivariate fine-tuning using ART and GAT architecture (see the sketch below)
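
Since the ablations single out the GAT module as critical, a minimal single-head graph-attention layer over per-channel features may help fix ideas. `TinyGATLayer`, the binary adjacency, and all dimensions are assumptions; the paper's actual GAT configuration (head count, graph construction, coupling with ART) is not specified here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGATLayer(nn.Module):
    """Single-head graph attention over EEG channels (a minimal sketch)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (C, F) per-channel features; adj: (C, C) binary adjacency
        hw = self.W(h)                                    # (C, F')
        C = hw.size(0)
        pairs = torch.cat([hw.repeat_interleave(C, 0),    # every (i, j) pair
                           hw.repeat(C, 1)], dim=1).view(C, C, -1)
        e = F.leaky_relu(self.a(pairs)).squeeze(-1)       # (C, C) raw scores
        e = e.masked_fill(adj == 0, float("-inf"))        # restrict to neighbors
        return torch.softmax(e, dim=1) @ hw               # attention-weighted sum

layer = TinyGATLayer(64, 32)
out = layer(torch.randn(10, 64), torch.ones(10, 10))      # fully connected toy graph
```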
Xiang Li
Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China and Shandong Provincial Key Laboratory of Computing Power Internet and Service Computing, Shandong Fundamental Research Center for Computer Science, Jinan, China
You Li
Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center (National Supercomputer Center in Jinan), Qilu University of Technology (Shandong Academy of Sciences), Jinan, China and Shandong Provincial Key Laboratory of Computing Power Internet and Service Computing, Shandong Fundamental Research Center for Computer Science, Jinan, China
Yazhou Zhang
Associate Professor, Tianjin University
Sentiment Analysis · Quantum Cognition · Sarcasm Detection · Humor Analysis