🤖 AI Summary
Cross-subject SSVEP decoding is hindered by substantial inter-subject variability and the high cost of labeled data. To address these challenges, this work proposes a self-training-driven domain adaptation framework that integrates filter-bank Euclidean alignment (FBEA) and adversarial learning to align the source and target domain distributions. A dual-ensemble self-training (DEST) mechanism is introduced to enhance pseudo-label quality, while time-frequency augmented contrastive learning (TFA-CL) is employed to improve feature discriminability. The proposed method achieves state-of-the-art cross-subject classification performance on both the Benchmark and BETA datasets and remains robust across varying signal lengths.
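For intuition, here is a minimal sketch of what filter-bank Euclidean alignment could look like, assuming NumPy/SciPy, trials shaped `(n_trials, n_channels, n_samples)`, and FBCCA-style sub-bands; the function names and band edges are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import fractional_matrix_power

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter applied along the time axis."""
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x, axis=-1)

def euclidean_align(trials):
    """Standard Euclidean alignment: whiten all trials of one subject by
    the inverse square root of their mean spatial covariance matrix."""
    ref = np.mean([x @ x.T / x.shape[-1] for x in trials], axis=0)
    ref_inv_sqrt = fractional_matrix_power(ref, -0.5).real
    return np.stack([ref_inv_sqrt @ x for x in trials])

def filter_bank_ea(trials, fs, bands=((8, 88), (16, 88), (24, 88))):
    """Hypothetical FBEA: apply Euclidean alignment independently within
    each SSVEP sub-band, then stack the aligned sub-band signals.
    The band edges follow the common FBCCA convention and are assumptions."""
    return np.stack(
        [euclidean_align(bandpass(trials, lo, hi, fs)) for lo, hi in bands],
        axis=1,  # output shape: (n_trials, n_bands, n_channels, n_samples)
    )
```

Whitening each sub-band by its mean spatial covariance maps every subject's data toward a common reference frame, which is the standard Euclidean-alignment idea that a filter-bank variant would extend with frequency-specific alignment.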
📝 Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) are widely used for their high signal-to-noise ratio and user-friendliness. Accurate decoding of SSVEP signals is crucial for interpreting user intentions in BCI applications. However, inter-subject signal variability and the high cost of user-specific annotation limit recognition performance. We therefore propose a novel cross-subject domain adaptation method built upon the self-training paradigm. Specifically, a Filter-Bank Euclidean Alignment (FBEA) strategy is designed to exploit the frequency information contained in SSVEP filter banks. We then propose a Cross-Subject Self-Training (CSST) framework consisting of two stages: Pre-Training with Adversarial Learning (PTAL), which aligns the source and target distributions, and Dual-Ensemble Self-Training (DEST), which refines pseudo-label quality. Moreover, we introduce a Time-Frequency Augmented Contrastive Learning (TFA-CL) module to enhance feature discriminability across multiple augmented views. Extensive experiments on the Benchmark and BETA datasets demonstrate that our approach achieves state-of-the-art performance across varying signal lengths.
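As a rough illustration of the dual-ensemble pseudo-labeling idea, the PyTorch sketch below keeps a target trial only when two ensemble members agree on the class and their averaged confidence clears a threshold; the member models, agreement rule, and threshold are assumptions, since the abstract does not specify DEST's exact selection criterion:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_pseudo_labels(model_a, model_b, x_target, threshold=0.9):
    """Illustrative dual-ensemble pseudo-labeling (not the paper's exact
    DEST procedure): accept a target trial only if both ensemble members
    predict the same class and the averaged confidence is high enough."""
    p_a = F.softmax(model_a(x_target), dim=1)  # (batch, n_classes)
    p_b = F.softmax(model_b(x_target), dim=1)
    p_mean = 0.5 * (p_a + p_b)
    conf, labels = p_mean.max(dim=1)
    agree = p_a.argmax(dim=1) == p_b.argmax(dim=1)
    keep = agree & (conf > threshold)
    return x_target[keep], labels[keep]
```

Requiring agreement between two members before accepting a pseudo-label is a common way to suppress confirmation bias in self-training, which matches the stated goal of refining pseudo-label quality.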