🤖 AI Summary
Cross-subject EEG-based emotion recognition is severely hindered by substantial inter-subject variability, including neurophysiological differences and heterogeneous emotional responses. To address this, we propose the Frequency-Adaptive Dynamic Graph Transformer (FreqDGT), the first framework to jointly model frequency-specific EEG characteristics, subject-specific functional brain connectivity, and cross-subject temporal consistency. FreqDGT comprises three core modules: (i) a Frequency-Adaptive Processing (FAP) module to enhance discriminability across frequency bands; (ii) an Adaptive Dynamic Graph Learning (ADGL) module to capture subject-specific evolving brain networks; and (iii) a Multi-scale Temporal Disentanglement Network (MTDN) integrating Transformer architectures with adversarial disentanglement to isolate emotion-relevant, subject-invariant representations. Evaluated on multiple benchmark EEG datasets, FreqDGT achieves significant improvements in cross-subject emotion classification accuracy, demonstrating its effectiveness in suppressing subject-specific interference while strengthening shared affective features.
📝 Abstract
Electroencephalography (EEG) serves as a reliable and objective signal for emotion recognition in affective brain-computer interfaces, offering unique advantages through its high temporal resolution and ability to capture authentic emotional states that cannot be consciously controlled. However, cross-subject generalization remains a fundamental challenge due to individual variability in neurophysiology, cognitive traits, and emotional responses. We propose FreqDGT, a frequency-adaptive dynamic graph transformer that systematically addresses these limitations through an integrated framework. FreqDGT introduces frequency-adaptive processing (FAP) to dynamically weight emotion-relevant frequency bands based on neuroscientific evidence, employs adaptive dynamic graph learning (ADGL) to learn input-specific brain connectivity patterns, and implements a multi-scale temporal disentanglement network (MTDN) that combines hierarchical temporal transformers with adversarial feature disentanglement to capture temporal dynamics while ensuring cross-subject robustness. Comprehensive experiments demonstrate that FreqDGT significantly improves cross-subject emotion recognition accuracy, confirming the effectiveness of integrating frequency-adaptive, spatial-dynamic, and temporal-hierarchical modeling while ensuring robustness to individual differences. The code is available at https://github.com/NZWANG/FreqDGT.
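To make the first two ideas concrete, the sketch below illustrates, in minimal numpy, (a) frequency-adaptive weighting in the spirit of FAP (a softmax over learned per-band scores re-weights the five canonical EEG bands before fusion) and (b) input-specific connectivity in the spirit of ADGL (a row-normalized similarity graph computed from the fused channel features). The shapes (62 channels, 16-dim features), the scoring projection, and the similarity-based adjacency are illustrative assumptions for exposition, not the paper's exact design; the actual implementation is at the GitHub repository linked above.

```python
import numpy as np

# Five canonical EEG frequency bands (for reference only).
BANDS = ["delta", "theta", "alpha", "beta", "gamma"]

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def frequency_adaptive_fuse(band_feats, score_w):
    """Toy FAP-style fusion (an assumption, not the paper's exact module).

    band_feats: (n_bands, n_channels, d) per-band channel features
    score_w:    (d,) hypothetical learned scoring vector
    Returns fused (n_channels, d) features and the band weights.
    """
    scores = band_feats.mean(axis=1) @ score_w        # (n_bands,) per-band score
    weights = softmax(scores)                         # attention over bands
    fused = np.tensordot(weights, band_feats, axes=1) # weighted band fusion
    return fused, weights

def dynamic_adjacency(node_feats):
    """Toy ADGL-style graph: input-conditioned, row-stochastic adjacency.

    node_feats: (n_channels, d) -> (n_channels, n_channels) adjacency.
    """
    sim = node_feats @ node_feats.T                   # pairwise similarity
    return softmax(sim, axis=-1)                      # rows sum to 1

# Example with 5 bands, 62 EEG channels, 16-dim features (assumed sizes).
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 62, 16))
w = rng.standard_normal(16)
fused, weights = frequency_adaptive_fuse(feats, w)
adj = dynamic_adjacency(fused)
```

Because both the band weights and the adjacency depend on the input features, different subjects (or trials) yield different effective band emphases and connectivity patterns, which is the intuition behind the "adaptive" and "dynamic" components.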