🤖 AI Summary
To address EEG-based emotion recognition under the challenging zero-target-domain scenario—where no labeled or unlabeled target-domain data is available—this paper proposes MATL-DC, a Multi-domain Aggregated Transfer Learning framework with Domain-Class disentanglement. MATL-DC decomposes neural representations into domain-invariant and class-discriminative components via a dual-invariance mechanism, constructs a unified “super-domain” space by aggregating multiple source domains, and incorporates class-prototype modeling and pairwise contrastive learning to enable robust cross-domain knowledge transfer without any target-domain supervision. The model is end-to-end trainable and significantly enhances generalization to unseen target domains. Evaluated on SEED, SEED-IV, and SEED-V, it achieves accuracies of 84.70%, 68.11%, and 61.08%, respectively—matching or surpassing state-of-the-art transfer methods that require target-domain data. This work establishes a novel paradigm for zero-shot EEG emotion recognition in real-world deployment.
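The pairwise contrastive learning mentioned above can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the paper's exact objective: it scores every pair of samples by feature cosine similarity against a same-class indicator, turning classification into a pair-matching problem (the squared-error pair loss is an assumption for illustration).

```python
import numpy as np

def pairwise_targets(labels):
    """Binary target matrix: 1 where a pair shares an emotion label, else 0."""
    return (labels[:, None] == labels[None, :]).astype(float)

def pairwise_similarity_loss(features, labels):
    """Compare feature cosine similarities of all sample pairs with the
    same-class indicator, so the model learns pair similarity rather than
    per-sample labels (which helps tolerate label noise)."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = f @ f.T                      # pairwise cosine similarity
    targets = pairwise_targets(labels)  # 1 = same class, 0 = different
    return float(((sims - targets) ** 2).mean())
```

With perfectly separated class features the loss vanishes, since same-class pairs reach cosine similarity 1 and cross-class pairs 0.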
📝 Abstract
Emotion recognition based on electroencephalography (EEG) signals is increasingly becoming a key research hotspot in affective Brain-Computer Interfaces (aBCIs). However, current transfer learning models depend heavily on both source-domain and target-domain data, which hinders the practical application of emotion recognition. Therefore, we propose a Multi-domain Aggregation Transfer Learning framework for EEG emotion recognition with Domain-Class prototypes under unseen targets (MATL-DC). We design a feature decoupling module that separates class-invariant domain features and domain-invariant class features from shallow features. During model training, a multi-domain aggregation mechanism aggregates the domain feature space into a super-domain, which enhances the characteristics of emotional EEG signals. Within each super-domain, we further extract class prototype representations from the class features. In addition, we adopt a pairwise learning strategy that transforms the sample classification problem into a similarity problem between sample pairs, which effectively alleviates the influence of label noise. It is worth noting that the target domain is completely unseen during training. In the inference stage, we use the trained domain-class prototypes to perform emotion recognition. We rigorously validate MATL-DC on the publicly available SEED, SEED-IV and SEED-V databases. The results show that the MATL-DC model achieves accuracies of 84.70%, 68.11% and 61.08%, respectively. MATL-DC achieves comparable or even better performance than methods that rely on both source- and target-domain data. The source code is available at https://github.com/WuCB-BCI/MATL-DC.
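The prototype-based inference stage described above can be sketched as follows. This is a hedged NumPy illustration under assumed details (mean-pooled prototypes and cosine similarity as the matching rule; the paper's exact prototype construction may differ): each unseen-target sample is assigned the emotion class of its most similar trained prototype.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Build one prototype per emotion class as the mean of that class's
    features (a simple stand-in for the trained domain-class prototypes)."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def predict(sample_features, prototypes):
    """Assign each sample the class of its nearest prototype by cosine
    similarity; no target-domain labels are needed at inference time."""
    f = sample_features / np.linalg.norm(sample_features, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (f @ p.T).argmax(axis=1)
```

Because inference needs only the stored prototypes, the target domain can remain entirely unseen during training, which is the zero-target-domain setting the paper targets.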