🤖 AI Summary
To address poor generalization of classification models, high cross-paradigm adaptation costs, and deployment complexity in brain–computer interfaces (BCIs), this paper proposes a lightweight unified decoding framework—the first to enable end-to-end joint recognition of three mainstream BCI paradigms: motor imagery (MI), steady-state visual evoked potentials (SSVEP), and P300. Methodologically, we design a spatiotemporal convolution module coupled with a multi-scale local feature selection mechanism to extract paradigm-shared representations, and introduce multi-dimensional global feature fusion with dynamic weighting to enhance generalization. The model contains only 1.2 million parameters—substantially fewer than comparable deep models. Evaluated on a mixed-paradigm dataset, it achieves 88.39% accuracy and a macro-F1 score of 0.8092, significantly outperforming existing single-paradigm and cross-paradigm baseline methods. These results validate the framework’s effectiveness, universality, and practical deployability.
📝 Abstract
Classification models used in brain-computer interfaces (BCIs) are usually designed for a single BCI paradigm, so the model must be redeveloped whenever it is applied to a new paradigm, incurring repeated cost and effort. Moreover, less complex deep learning models are desirable for practical use, as well as for deployment on portable devices. To fill these gaps, in this study we propose a lightweight and unified decoding model for cross-BCI-paradigm classification. The proposed model starts with a tempo-spatial convolution, followed by a multi-scale local feature selection module that extracts local features shared across BCI paradigms and generates weighted features. Finally, a multi-dimensional global feature extraction module is designed, in which multi-dimensional global features are extracted from the weighted features and fused with them to form high-level feature representations associated with BCI paradigms. Evaluated on a mixture of three classical BCI paradigms (i.e., MI, SSVEP, and P300), the proposed model achieves 88.39% accuracy, 82.36% macro-precision, 80.01% macro-recall, and a macro-F1-score of 0.8092, significantly outperforming the compared models. This study provides a feasible solution for cross-BCI-paradigm classification. It lays a technological foundation for developing a new generation of unified decoding systems, paving the way for low-cost and universal practical applications.
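The pipeline described in the abstract (tempo-spatial convolution → multi-scale local feature selection → multi-dimensional global feature fusion → classifier) can be sketched with toy NumPy operations. This is only a shape-level illustration under assumed dimensions (8 electrodes, 128 time samples, 4 virtual channels, 3 output classes); the kernel sizes, softmax scale-weighting, and mean-pooling "global" features are stand-ins for the paper's learned modules, not its actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_valid(x, k):
    """Valid-mode 1-D convolution along the last (time) axis."""
    T = x.shape[-1] - len(k) + 1
    return np.stack([x[..., t:t + len(k)] @ k for t in range(T)], axis=-1)

# Toy EEG trial: 8 electrodes x 128 time samples (illustrative shapes only).
x = rng.standard_normal((8, 128))

# 1) Tempo-spatial convolution: a temporal filter applied per electrode,
#    then a spatial projection mixing electrodes into 4 virtual channels.
temporal_kernel = rng.standard_normal(9)
spatial_weights = rng.standard_normal((4, 8))
h = spatial_weights @ conv1d_valid(x, temporal_kernel)      # (4, 120)

# 2) Multi-scale local feature selection: local averages at several window
#    sizes, combined with softmax weights (a stand-in for learned selection).
scales = [3, 5, 7]
T_min = h.shape[-1] - max(scales) + 1                       # common length
locals_ = [conv1d_valid(h, np.ones(s) / s)[..., :T_min] for s in scales]
scale_logits = rng.standard_normal(len(scales))
w = np.exp(scale_logits) / np.exp(scale_logits).sum()
weighted = sum(wi * f for wi, f in zip(w, locals_))         # (4, 114)

# 3) Multi-dimensional global features: pool over time and over virtual
#    channels, then fuse (concatenate) with the flattened weighted features.
g_time = weighted.mean(axis=-1)                             # (4,)
g_chan = weighted.mean(axis=0)                              # (114,)
fused = np.concatenate([weighted.ravel(), g_time, g_chan])  # (574,)

# 4) Linear head over the fused representation -> 3 class logits
#    (standing in for MI / SSVEP / P300 targets in the mixed dataset).
W_out = rng.standard_normal((3, fused.size)) * 0.01
logits = W_out @ fused
print(logits.shape)  # (3,)
```

In the actual model these stages would be trained end to end; the sketch only shows how local weighted features and global pooled features can be fused into one representation before classification.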