SAFE: Secure and Accurate Federated Learning for Privacy-Preserving Brain-Computer Interfaces

📅 2026-01-09
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses three major challenges in electroencephalography (EEG)-based brain–computer interfaces: poor generalization of decoding models, vulnerability to adversarial attacks, and privacy leakage. To tackle these issues simultaneously, the authors propose a novel federated learning framework that preserves user privacy through localized data processing, mitigates cross-subject feature distribution shifts via local batch-specific normalization, and enhances model robustness by integrating federated adversarial training with adversarial weight perturbation. Notably, this approach achieves high decoding accuracy, strong adversarial robustness, and strict privacy guarantees without requiring calibration data from target subjects—a first in the field. Extensive experiments on five public EEG datasets demonstrate that the method significantly outperforms 14 state-of-the-art approaches and even surpasses centralized, non-private training baselines.

📝 Abstract
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) are widely adopted due to their efficiency and portability; however, their decoding algorithms still face multiple challenges, including inadequate generalization, adversarial vulnerability, and privacy leakage. This paper proposes Secure and Accurate FEderated learning (SAFE), a federated learning-based approach that protects user privacy by keeping data local during model training. SAFE employs local batch-specific normalization to mitigate cross-subject feature distribution shifts and hence improves model generalization. It further enhances adversarial robustness by introducing perturbations in both the input space and the parameter space through federated adversarial training and adversarial weight perturbation. Experiments on five EEG datasets from motor imagery (MI) and event-related potential (ERP) BCI paradigms demonstrated that SAFE consistently outperformed 14 state-of-the-art approaches in both decoding accuracy and adversarial robustness, while ensuring privacy protection. Notably, it even outperformed centralized training approaches that do not consider privacy protection at all. To our knowledge, SAFE is the first algorithm to simultaneously achieve high decoding accuracy, strong adversarial robustness, and reliable privacy protection without using any calibration data from the target subject, making it highly desirable for real-world BCIs.
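The abstract's core privacy idea is that EEG data never leaves the client: only model parameters are aggregated, and normalization statistics stay client-specific to absorb cross-subject distribution shifts. A minimal sketch of that aggregation pattern (FedBN-style averaging, not the authors' exact SAFE implementation; all function and key names here are hypothetical):

```python
import numpy as np

def fedavg_shared(client_params, shared_keys):
    """Average only the shared (non-normalization) parameters across clients.

    client_params: list of dicts mapping parameter names to np.ndarrays.
    shared_keys: names of parameters that are averaged globally; anything
    not listed (e.g. batch-norm statistics) remains local to each client.
    """
    return {k: np.mean([p[k] for p in client_params], axis=0)
            for k in shared_keys}

def apply_update(client_param, avg_shared, shared_keys):
    """Overwrite a client's shared parameters with the global average,
    leaving its local normalization statistics untouched."""
    updated = dict(client_param)
    for k in shared_keys:
        updated[k] = avg_shared[k]
    return updated

# Toy example: two clients with one shared weight vector "w" and a
# client-specific normalization statistic "bn_mean".
c1 = {"w": np.array([1.0, 2.0]), "bn_mean": np.array([0.1])}
c2 = {"w": np.array([3.0, 4.0]), "bn_mean": np.array([0.9])}
shared = ["w"]

avg = fedavg_shared([c1, c2], shared)
c1_new = apply_update(c1, avg, shared)
c2_new = apply_update(c2, avg, shared)
# After aggregation the shared weights agree across clients, while each
# client keeps its own normalization statistics (and its raw EEG data).
```

This captures only the communication/aggregation skeleton; the paper's additional components (federated adversarial training and adversarial weight perturbation) would modify the local training step on each client before aggregation.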
Problem

Research questions and friction points this paper is trying to address.

brain-computer interfaces
EEG
privacy leakage
adversarial vulnerability
generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Learning
Adversarial Robustness
Privacy-Preserving BCI
Batch-Specific Normalization
EEG Decoding
Tianwang Jia
Key Laboratory of the Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
Xiaoqing Chen
Huazhong University of Science and Technology
Deep Learning
Brain-Computer Interface
Dongrui Wu
Key Laboratory of the Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China; Zhongguancun Academy, Beijing, 100084 China