EmT: A Novel Transformer for Generalized Cross-subject EEG Emotion Recognition

📅 2024-06-26
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing cross-subject EEG-based emotion recognition methods struggle to model the long-range temporal dependencies associated with affective cognition. To address this, we propose EmT, the first model to represent EEG signals as dynamic temporal graph sequences. EmT integrates three novel components: a temporal graph construction module (TGC), a residual multi-view pyramid graph convolutional network (RMPG), and a temporal contextual transformer (TCT) with two token-mixing mechanisms, which together enable explicit modeling of long-range cognitive temporal dependencies. The architecture supports both emotion classification and regression, with task-specific output modules allowing flexible adaptation. Evaluated on four public benchmark EEG datasets, EmT achieves state-of-the-art cross-subject performance on both tasks, outperforming existing graph neural network and Transformer-based baselines.

📝 Abstract
Integrating prior knowledge of neurophysiology into neural network architecture enhances the performance of emotion decoding. While numerous techniques emphasize learning spatial and short-term temporal patterns, limited attention has been paid to capturing the vital long-term contextual information associated with emotional cognitive processes. To address this gap, we introduce a novel transformer model called emotion transformer (EmT). EmT is designed to excel in both generalized cross-subject EEG emotion classification and regression tasks. In EmT, EEG signals are transformed into a temporal graph format, creating a sequence of EEG feature graphs via a temporal graph construction module (TGC). A novel residual multi-view pyramid GCN module (RMPG) then learns dynamic graph representations for each EEG feature graph in the sequence, and the learned representations of each graph are fused into one token. Furthermore, we design a temporal contextual transformer module (TCT) with two types of token mixers to learn temporal contextual information. Finally, the task-specific output module (TSO) generates the desired outputs. Experiments on four publicly available datasets show that EmT outperforms the baseline methods on both EEG emotion classification and regression tasks. The code is available at https://github.com/yi-ding-cs/EmT.
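The abstract's pipeline (TGC → RMPG → TCT → TSO) can be sketched end to end with simple stand-ins. The following is a minimal NumPy illustration, not the paper's implementation: the correlation-based adjacency, single-layer GCN with mean pooling, plain softmax self-attention, and all variable names (`win`, `hop`, `d_tok`, etc.) are assumptions chosen for clarity; EmT's actual graph construction, multi-view pyramid GCN, and dual token mixers are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_temporal_graphs(eeg, win, hop):
    """TGC sketch: slide a window over the EEG; each segment provides
    node features plus a correlation-based adjacency (an assumption;
    the paper's graph construction may differ)."""
    C, L = eeg.shape
    graphs = []
    for start in range(0, L - win + 1, hop):
        seg = eeg[:, start:start + win]        # (channels, win) node features
        adj = np.abs(np.corrcoef(seg))         # (C, C) functional connectivity
        graphs.append((seg, adj))
    return graphs

def gcn_token(x, adj, w):
    """Single GCN layer + mean pooling, standing in for RMPG:
    ReLU(A_hat X W), then fuse all nodes into one token."""
    deg = adj.sum(1)
    a_hat = adj / np.sqrt(np.outer(deg, deg))  # symmetric normalization
    h = np.maximum(a_hat @ x @ w, 0.0)
    return h.mean(0)

def self_attention(tokens):
    """Plain softmax self-attention over the token sequence,
    standing in for the TCT token mixers."""
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)
    scores -= scores.max(1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(1, keepdims=True)
    return attn @ tokens

# Toy forward pass on one synthetic EEG trial.
C, L, win, hop, d_tok, n_cls = 32, 512, 64, 32, 16, 2
eeg = rng.standard_normal((C, L))
w_gcn = rng.standard_normal((win, d_tok)) * 0.1   # shared GCN weight
w_out = rng.standard_normal((d_tok, n_cls)) * 0.1 # TSO classification head

tokens = np.stack([gcn_token(x, a, w_gcn)
                   for x, a in build_temporal_graphs(eeg, win, hop)])
context = self_attention(tokens)                  # (T, d_tok) contextual tokens
logits = context.mean(0) @ w_out                  # temporal pooling + head
print(logits.shape)
```

For regression, the TSO head would simply map to a single continuous value instead of class logits; the rest of the pipeline is shared, which is what lets EmT serve both task types.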
Problem

Research questions and friction points this paper is trying to address.

How to integrate prior neurophysiological knowledge into network architectures for EEG emotion decoding.
How to capture long-term contextual information associated with emotional cognitive processes.
How to improve generalized cross-subject EEG emotion classification and regression.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer model integrates neurophysiology for EEG emotion recognition
Temporal graph construction transforms EEG signals into feature graphs
Residual multi-view pyramid GCN learns dynamic graph representations
Yi Ding, College of Computing and Data Science, Nanyang Technological University
Chengxuan Tong, College of Computing and Data Science, Nanyang Technological University
Shuailei Zhang, Research Fellow, College of Computing and Data Science, Nanyang Technological University (brain-computer interface, neurorehabilitation)
Muyun Jiang, Nanyang Technological University
Yong Li, College of Computing and Data Science, Nanyang Technological University
Kevin Lim Jun Liang, Wilmar International, Singapore
Cuntai Guan, President's Chair Professor, CCDS, Nanyang Technological University (brain-computer interfaces, machine learning, artificial intelligence)