Source-free domain adaptation for SSVEP-based brain-computer interfaces.

📅 2023-05-27
🏛️ Journal of Neural Engineering
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address the lengthy calibration requirement and poor user experience for novice users in SSVEP-based BCI speller systems, this paper proposes a source-free single-sample unsupervised domain adaptation (SFDA) framework. Methodologically, it leverages only a small amount of unlabeled EEG data from the target user, integrating pseudo-label-based self-adaptation with a manifold-structure-aware local-regularity term to enable efficient transfer of a pre-trained deep network to new users. Key contributions include: (i) the first SFDA paradigm that operates without access to, or assumptions about, any source-domain data; and (ii) a discriminative manifold constraint that enhances cross-subject feature consistency. Evaluated on the Benchmark and BETA datasets, the method achieves information transfer rates of 201.15 and 145.02 bits/min, respectively, substantially outperforming state-of-the-art approaches while eliminating the need for labeled, subject-specific calibration sessions.
📝 Abstract
OBJECTIVE SSVEP-based BCI spellers assist individuals experiencing speech difficulties by enabling them to communicate at a fast rate. However, achieving a high information transfer rate (ITR) in most prominent methods requires an extensive calibration period before using the system, leading to discomfort for new users. We address this issue by proposing a novel method that adapts a powerful deep neural network (DNN) pre-trained on data from source domains (data from former users or participants of previous experiments) to the new user (target domain), based only on the unlabeled target data. APPROACH Our method adapts the pre-trained DNN to the new user by minimizing our proposed custom loss function composed of self-adaptation and local-regularity terms. The self-adaptation term uses the pseudo-label strategy, while the novel local-regularity term exploits the data structure and forces the DNN to assign similar labels to adjacent instances. MAIN RESULTS Our method achieves excellent 201.15 bits/min and 145.02 bits/min ITRs on the Benchmark and BETA datasets, respectively, and outperforms the state-of-the-art alternatives. Our code is available at https://github.com/osmanberke/SFDA-SSVEP-BCI SIGNIFICANCE The proposed method prioritizes user comfort by removing the burden of calibration while maintaining an excellent character identification accuracy and ITR. Because of these attributes, our approach could significantly accelerate the adoption of BCI systems into everyday life.
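For context, ITR figures like those quoted above are conventionally computed with the standard Wolpaw formula from the number of selectable targets, the classification accuracy, and the time per selection. A minimal sketch (the function name and parameters are illustrative, not taken from the paper):

```python
import math

def itr_bits_per_min(n_classes, accuracy, trial_secs):
    """Standard Wolpaw ITR in bits/min.

    n_classes: number of selectable targets (e.g. 40 for the
               SSVEP Benchmark speller layout).
    accuracy:  character identification accuracy in [0, 1].
    trial_secs: time per selection, including any gaze-shift gap.
    """
    p = accuracy
    # At or below chance level the formula yields no information.
    if p <= 1.0 / n_classes:
        return 0.0
    bits = math.log2(n_classes)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits * 60.0 / trial_secs
```

With perfect accuracy on a 40-target speller and a 1 s selection time, this gives log2(40) * 60 ≈ 319.5 bits/min, which is why sub-second, high-accuracy decoding is needed to reach ITRs in the 145–200 bits/min range reported here.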
Problem

Research questions and friction points this paper is trying to address.

Eliminating calibration burden for new BCI users
Adapting pre-trained models using unlabeled target data
Maintaining high information transfer rates without calibration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts pre-trained DNN using unlabeled target data
Minimizes custom loss with pseudo-label and local-regularity terms
Forces similar labels for adjacent data instances
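The two-term loss described above can be sketched as follows. This is a simplified illustration under assumed definitions (softmax pseudo-labels for the self-adaptation term, Euclidean k-nearest neighbors for "adjacent instances"), not the authors' exact implementation; see their repository for the real one.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def adaptation_loss(logits, features, k=3, lam=1.0):
    """Sketch of a pseudo-label + local-regularity adaptation loss.

    logits:   (n, C) network outputs on unlabeled target trials.
    features: (n, d) embeddings used to define adjacency.
    k:        number of nearest neighbors treated as "adjacent".
    lam:      weight of the local-regularity term (hypothetical name).
    """
    p = softmax(logits)
    # Self-adaptation term: cross-entropy against the network's own
    # argmax predictions (the pseudo-label strategy).
    pseudo = p.argmax(axis=1)
    self_term = -np.mean(np.log(p[np.arange(len(p)), pseudo] + 1e-12))
    # Local-regularity term: penalize prediction disagreement between
    # each instance and its k nearest neighbors in feature space,
    # pushing adjacent instances toward the same label.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-matches
    nn = np.argsort(d2, axis=1)[:, :k]    # (n, k) neighbor indices
    local_term = np.mean(((p[:, None, :] - p[nn]) ** 2).sum(-1))
    return self_term + lam * local_term
```

Minimizing such a loss over the unlabeled target trials lets the pre-trained DNN adapt to a new user without any labeled calibration data, since both terms are computed purely from the network's own predictions and the geometry of the target data.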