Functional connectivity guided deep neural network for decoding high-level visual imagery

📅 2025-10-30
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This study addresses the low decoding accuracy and poor generalizability of non-invasive EEG-based brain–computer interfaces (BCIs) for complex motor intention recognition. The authors propose a novel high-level visual imagery paradigm, in which users generate discriminative EEG patterns by imagining intricate upper-limb movements rather than simple motor acts. Methodologically, they introduce a fusion architecture that integrates functional connectivity graphs with a hybrid CNN–image transformer model to jointly encode spatiotemporal EEG dynamics and inter-regional functional coupling. The framework substantially improves cross-subject robustness and achieves high-accuracy multi-degree-of-freedom robotic arm intention decoding (mean 92.3%) in both offline and pseudo-online evaluations, addressing two fundamental limitations of conventional motor imagery BCIs: restricted task complexity and limited subject adaptability.

📝 Abstract
This study introduces a pioneering approach in brain-computer interface (BCI) technology, featuring our novel concept of high-level visual imagery for non-invasive electroencephalography (EEG)-based communication. High-level visual imagery, as proposed in our work, involves the user engaging in the mental visualization of complex upper limb movements. This innovative approach significantly enhances the BCI system, facilitating the extension of its applications to more sophisticated tasks such as EEG-based robotic arm control. By leveraging this advanced form of visual imagery, our study opens new horizons for intricate and intuitive mind-controlled interfaces. We developed an advanced deep learning architecture that integrates functional connectivity metrics with a convolutional neural network-image transformer. This framework is adept at decoding subtle user intentions, addressing the spatial variability in high-level visual tasks, and effectively translating these into precise commands for robotic arm control. Our comprehensive offline and pseudo-online evaluations demonstrate the framework's efficacy in real-time applications, including the nuanced control of robotic arms. The robustness of our approach is further validated through leave-one-subject-out cross-validation, marking a significant step towards versatile, subject-independent BCI applications. This research highlights the transformative impact of advanced visual imagery and deep learning in enhancing the usability and adaptability of BCI systems, particularly in robotic arm manipulation.
Problem

Research questions and friction points this paper is trying to address.

Decoding high-level visual imagery from EEG signals
Addressing spatial variability in complex visual tasks
Translating neural activity into robotic arm control commands
Innovation

Methods, ideas, or system contributions that make the work stand out.

Functional connectivity guided deep neural network
Integrates connectivity metrics with CNN-transformer
Decodes high-level visual imagery for robotic control
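The functional connectivity features described above can be illustrated with a minimal sketch. The paper does not specify which coupling metric it uses, so Pearson correlation between channel time series is assumed here; the function name and the toy 8-channel signal are hypothetical, chosen only to show the shape of the connectivity graph that would feed the downstream CNN–transformer.

```python
import numpy as np

def functional_connectivity(eeg):
    """Channel-by-channel functional connectivity matrix.

    eeg: array of shape (n_channels, n_samples).
    Pearson correlation is an assumed stand-in for the paper's
    (unspecified) connectivity metric.
    """
    fc = np.corrcoef(eeg)      # (n_channels, n_channels), values in [-1, 1]
    np.fill_diagonal(fc, 0.0)  # drop self-connections before graph use
    return fc

# Toy example: an 8-channel EEG segment, 2 s at a 250 Hz sampling rate
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 500))
fc = functional_connectivity(eeg)
```

The resulting symmetric matrix can be treated as a weighted adjacency graph and stacked with raw spatiotemporal features, which is the general fusion pattern the Innovation bullets describe.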
Byoung-Hee Kwon
Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
Minji Lee
Assistant Professor, The Catholic University of Korea
Machine Learning · Neuroscience · Brain-Computer Interface
Seong-Whan Lee
Department of Artificial Intelligence, Korea University, Seoul, 02841, Republic of Korea