FreqDGT: Frequency-Adaptive Dynamic Graph Networks with Transformer for Cross-subject EEG Emotion Recognition

📅 2025-06-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Cross-subject EEG-based emotion recognition is severely hindered by substantial inter-subject variability, including neurophysiological differences and heterogeneous emotional responses. To address this, we propose the Frequency-Adaptive Dynamic Graph Transformer (FreqDGT), the first framework to jointly model frequency-specific EEG characteristics, subject-specific functional brain connectivity, and cross-subject temporal consistency. FreqDGT comprises three core modules: (i) a Frequency-Adaptive Processing (FAP) module that enhances discriminability across frequency bands; (ii) an Adaptive Dynamic Graph Learning (ADGL) module that captures subject-specific, evolving brain networks; and (iii) a Multi-scale Temporal Disentanglement Network (MTDN) that integrates Transformer architectures with adversarial disentanglement to isolate emotion-relevant, subject-invariant representations. Evaluated on multiple benchmark EEG datasets, FreqDGT achieves significant improvements in cross-subject emotion classification accuracy, demonstrating its effectiveness in suppressing subject-specific interference while strengthening shared affective features.
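The summary attributes cross-subject robustness to adversarial disentanglement in the MTDN. A common way to implement this is a gradient reversal layer (GRL): identity in the forward pass, negated gradient in the backward pass, so the feature extractor learns to confuse a subject classifier. The sketch below is a minimal illustration of that trick, not the paper's implementation; the `lambda_` scale and shapes are assumed.

```python
import numpy as np

# Hedged sketch of adversarial disentanglement via a gradient reversal
# layer (an assumption about how "adversarial disentanglement" is
# typically realized, not taken from the paper's code).

def grl_forward(features):
    """Forward pass: identity, features flow through unchanged."""
    return features

def grl_backward(grad_from_subject_head, lambda_=1.0):
    """Backward pass: negate (and scale) the subject-classifier gradient
    before it reaches the feature extractor, pushing the extractor
    toward subject-invariant features."""
    return -lambda_ * grad_from_subject_head

g = np.array([0.5, -0.2, 0.1])
print(grl_forward(g))   # unchanged in the forward direction
print(grl_backward(g))  # reversed: [-0.5, 0.2, -0.1]
```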

📝 Abstract
Electroencephalography (EEG) serves as a reliable and objective signal for emotion recognition in affective brain-computer interfaces, offering unique advantages through its high temporal resolution and its ability to capture authentic emotional states that cannot be consciously controlled. However, cross-subject generalization remains a fundamental challenge due to individual variability in neurophysiology, cognitive traits, and emotional responses. We propose FreqDGT, a frequency-adaptive dynamic graph transformer that systematically addresses these limitations through an integrated framework. FreqDGT introduces frequency-adaptive processing (FAP) to dynamically weight emotion-relevant frequency bands based on neuroscientific evidence, employs adaptive dynamic graph learning (ADGL) to learn input-specific brain connectivity patterns, and implements a multi-scale temporal disentanglement network (MTDN) that combines hierarchical temporal transformers with adversarial feature disentanglement to capture temporal dynamics while ensuring cross-subject robustness. Comprehensive experiments demonstrate that FreqDGT significantly improves cross-subject emotion recognition accuracy, confirming the effectiveness of integrating frequency-adaptive, spatial-dynamic, and temporal-hierarchical modeling while remaining robust to individual differences. The code is available at https://github.com/NZWANG/FreqDGT.
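The FAP idea described above, dynamically weighting emotion-relevant frequency bands, can be sketched as learned attention over per-band features. The code below is a minimal numpy illustration under assumed shapes (five canonical EEG bands, a 62-channel montage, a toy feature dimension); the logits would be learned in the actual model, here they are fixed for demonstration.

```python
import numpy as np

# Illustrative sketch of frequency-adaptive weighting (FAP-style).
# Band names, shapes, and the fixed logits are assumptions for the demo.
BANDS = ["delta", "theta", "alpha", "beta", "gamma"]

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def frequency_adaptive_weighting(band_feats, band_logits):
    """Fuse per-band features with attention weights over bands.

    band_feats : (n_bands, channels, feat_dim) array of band-wise features
    band_logits: (n_bands,) scores (learnable in a real model)
    """
    w = softmax(band_logits)                   # attention over bands
    weighted = band_feats * w[:, None, None]   # scale each band's features
    return weighted.sum(axis=0), w             # fused feature map + weights

rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 62, 16))       # 5 bands, 62 channels
logits = np.array([0.1, 0.2, 0.8, 1.2, 0.5])   # e.g. beta emphasized
fused, weights = frequency_adaptive_weighting(feats, logits)
print(fused.shape, weights.round(3))
```

In a trained network the logits (or a small attention head producing them) would be optimized end-to-end, so the band emphasis adapts to the input rather than being hand-set.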
Problem

Research questions and friction points this paper is trying to address.

Address cross-subject EEG emotion recognition challenges
Dynamically weight emotion-relevant EEG frequency bands
Learn adaptive brain connectivity patterns for robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Frequency-adaptive processing for dynamic band weighting
Adaptive dynamic graph learning for brain connectivity
Multi-scale temporal disentanglement with transformers
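The second innovation, learning input-specific brain connectivity rather than using a fixed electrode graph, can be sketched as inferring an adjacency matrix from each sample's node features and then propagating messages over it. This is a generic adaptive-graph sketch under assumed shapes, not the paper's ADGL implementation.

```python
import numpy as np

# Hedged sketch of adaptive graph learning (ADGL-style): the adjacency
# is computed from the electrode features of each input sample.
# Shapes and the similarity measure are illustrative assumptions.

def softmax_rows(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def adaptive_adjacency(node_feats):
    """Infer a row-normalized adjacency from scaled node similarities."""
    sim = node_feats @ node_feats.T / np.sqrt(node_feats.shape[1])
    return softmax_rows(sim)               # each row sums to 1

def graph_propagate(node_feats, weight):
    """One message-passing step over the learned graph: A @ X @ W."""
    a = adaptive_adjacency(node_feats)
    return a @ node_feats @ weight

rng = np.random.default_rng(1)
x = rng.standard_normal((62, 16))          # 62 electrodes, toy features
w = rng.standard_normal((16, 8))           # projection (learnable)
h = graph_propagate(x, w)
print(h.shape)
```

Because the adjacency is recomputed per input, the connectivity pattern can differ across subjects and time windows, which is the property the bullet above highlights.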
Yueyang Li
Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong SAR, China
Shengyu Gong
School of Information Engineering, Shanghai Maritime University, Shanghai, China
Weiming Zeng
School of Information Engineering, Shanghai Maritime University, Shanghai, China
Nizhuan Wang
The Hong Kong Polytechnic University (PolyU)
AI · Brain-Computer Interface · Neuroimaging · Computational Linguistics · Neurolinguistics
Wai Ting Siok
The Hong Kong Polytechnic University
Reading development · Chinese reading · Developmental dyslexia · Neuroimaging · fMRI