🤖 AI Summary
This study addresses the low decoding accuracy and poor generalizability of non-invasive EEG-based brain–computer interfaces (BCIs) for complex motor intention recognition. We propose a novel high-order visual imagery (HOVI) paradigm, wherein users generate discriminative EEG patterns by imagining intricate upper-limb movements—rather than simple motor acts. Methodologically, we introduce the first fusion architecture integrating functional connectivity graphs with a hybrid CNN–Image Transformer model to jointly encode spatiotemporal EEG dynamics and inter-regional functional coupling. The framework significantly enhances cross-subject robustness and achieves high-accuracy multi-degree-of-freedom robotic arm intention decoding (mean 92.3%) in both offline and pseudo-online evaluations. This approach overcomes two fundamental limitations of conventional motor imagery BCIs: restricted task complexity and limited subject adaptability.
📝 Abstract
This study introduces a pioneering approach in brain-computer interface (BCI) technology, featuring our novel concept of high-level visual imagery for non-invasive electroencephalography (EEG)-based communication. High-level visual imagery, as proposed in our work, involves the user engaging in the mental visualization of complex upper limb movements. This innovative approach significantly enhances the BCI system, facilitating the extension of its applications to more sophisticated tasks such as EEG-based robotic arm control. By leveraging this advanced form of visual imagery, our study opens new horizons for intricate and intuitive mind-controlled interfaces. We developed an advanced deep learning architecture that integrates functional connectivity metrics with a convolutional neural network-image transformer. This framework is adept at decoding subtle user intentions, addressing the spatial variability in high-level visual tasks, and effectively translating these into precise commands for robotic arm control. Our comprehensive offline and pseudo-online evaluations demonstrate the framework's efficacy in real-time applications, including the nuanced control of robotic arms. The robustness of our approach is further validated through leave-one-subject-out cross-validation, marking a significant step towards versatile, subject-independent BCI applications. This research highlights the transformative impact of advanced visual imagery and deep learning in enhancing the usability and adaptability of BCI systems, particularly in robotic arm manipulation.