🤖 AI Summary
This work addresses three major challenges in electroencephalography (EEG)-based brain–computer interfaces: poor generalization of decoding models, vulnerability to adversarial attacks, and privacy leakage. To tackle these issues simultaneously, the authors propose a federated learning framework that preserves user privacy by keeping data processing local, mitigates cross-subject feature distribution shifts via local batch-specific normalization, and strengthens model robustness by combining federated adversarial training with adversarial weight perturbation. Notably, the approach achieves high decoding accuracy, strong adversarial robustness, and strict privacy guarantees without requiring any calibration data from target subjects, a first in the field. Extensive experiments on five public EEG datasets show that the method significantly outperforms 14 state-of-the-art approaches and even surpasses centralized, non-private training baselines.
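The summary's two privacy-related ingredients, federated averaging and local batch-specific normalization, can be illustrated with a toy sketch. The setup below is hypothetical (the variable names, dimensions, and the simple mean-aggregation server are illustrative assumptions, not the paper's actual implementation): each client (subject) shares only its model weights with the server, while its subject-specific normalization statistics never leave the device.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: each "client" (subject) holds a shared weight
# vector plus its own normalization statistics (mean/std of its local
# EEG features), which are never uploaded to the server.
n_clients, dim = 3, 4
clients = [
    {
        "w": rng.normal(size=dim),         # shared model parameters
        "bn_mean": rng.normal(size=dim),   # subject-specific statistics,
        "bn_std": rng.uniform(1, 2, dim),  # kept strictly local
    }
    for _ in range(n_clients)
]

def fedavg(weight_list):
    """Server step: average only the shared parameters (plain FedAvg)."""
    return np.mean(weight_list, axis=0)

# One communication round: clients upload w, the server averages, and
# clients download the global w while keeping their local statistics.
global_w = fedavg([c["w"] for c in clients])
for c in clients:
    c["w"] = global_w.copy()  # bn_mean / bn_std remain untouched
```

After the round, all clients agree on the shared weights, yet each still normalizes its inputs with its own statistics, which is the mechanism the summary credits for handling cross-subject distribution shift.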
📝 Abstract
Electroencephalogram (EEG)-based brain–computer interfaces (BCIs) are widely adopted due to their efficiency and portability; however, their decoding algorithms still face multiple challenges, including inadequate generalization, adversarial vulnerability, and privacy leakage. This paper proposes Secure and Accurate FEderated learning (SAFE), a federated learning-based approach that protects user privacy by keeping data local during model training. SAFE employs local batch-specific normalization to mitigate cross-subject feature distribution shifts and thereby improves model generalization. It further enhances adversarial robustness by introducing perturbations in both the input space and the parameter space, through federated adversarial training and adversarial weight perturbation, respectively. Experiments on five EEG datasets from motor imagery (MI) and event-related potential (ERP) BCI paradigms demonstrated that SAFE consistently outperformed 14 state-of-the-art approaches in both decoding accuracy and adversarial robustness, while ensuring privacy protection. Notably, it even outperformed centralized training approaches that do not consider privacy protection at all. To our knowledge, SAFE is the first algorithm to simultaneously achieve high decoding accuracy, strong adversarial robustness, and reliable privacy protection without using any calibration data from the target subject, making it highly desirable for real-world BCIs.
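The abstract's robustness recipe, perturbing both the input space and the parameter space, can be sketched on a toy logistic-regression problem. Everything below is an illustrative assumption rather than the paper's method: the data, the FGSM-style input perturbation, and the simple gradient-direction weight perturbation (standing in for adversarial weight perturbation) are all minimal placeholders, and the federated aggregation step is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy data: binary labels from a linear rule on 4 features.
X = rng.normal(size=(32, 4))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0]) > 0).astype(float)
w = np.zeros(4)

def loss_grad(w, X, y):
    """Mean logistic-loss gradients w.r.t. weights and w.r.t. inputs."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    gw = X.T @ (p - y) / len(y)        # gradient w.r.t. weights
    gx = np.outer(p - y, w) / len(y)   # gradient w.r.t. inputs
    return gw, gx

eps_x, gamma, lr = 0.1, 0.05, 0.5
for _ in range(200):
    # 1) Input-space perturbation: FGSM-style adversarial examples.
    _, gx = loss_grad(w, X, y)
    X_adv = X + eps_x * np.sign(gx)
    # 2) Parameter-space perturbation: nudge the weights in the
    #    loss-increasing direction (a crude stand-in for AWP).
    gw_adv, _ = loss_grad(w, X_adv, y)
    v = gamma * gw_adv / (np.linalg.norm(gw_adv) + 1e-12)
    # 3) Take the descent step using the gradient at the perturbed weights.
    gw_pert, _ = loss_grad(w + v, X_adv, y)
    w -= lr * gw_pert

# Clean accuracy of the adversarially trained toy model.
acc = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y)
```

The design point the sketch mirrors is that the loss is minimized at weights that remain low-loss even after both the inputs and the weights themselves are adversarially perturbed, which is what flattens the loss landscape and yields robustness.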