🤖 AI Summary
This study addresses the challenge of accurately decoding hand movement trajectories from electroencephalography (EEG) signals to control robotic systems, with a focus on maintaining robustness in cross-subject scenarios. The authors propose a hybrid model integrating convolutional neural networks (CNNs) with an attention mechanism, further extended to multimodal EEG–electromyography (EMG) decoding. A novel copilot post-processing framework, built on a finite-state machine, is introduced to dynamically filter low-confidence trajectory points using a motion-state-aware critic. This approach preserves over 80% of the original data points while substantially enhancing trajectory fidelity. Within-subject decoding achieves Pearson correlation coefficients (PCC) of 0.9854, 0.9946, and 0.9065 along the X, Y, and Z axes, respectively; after copilot refinement, the overall PCC improves to 0.93, enabling successful execution of grasping tasks by a Franka Panda robotic arm.
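The per-axis PCC figures quoted above are standard Pearson correlations between the true and decoded midpoint trajectories. As a minimal sketch (function name and toy trajectory are illustrative, not from the paper), the metric can be computed per axis with NumPy:

```python
import numpy as np

def axis_pcc(true_traj, pred_traj):
    """Pearson correlation coefficient per axis (X, Y, Z).

    true_traj, pred_traj: arrays of shape (T, 3) holding the
    midpoint trajectory between the thumb and index finger.
    """
    return np.array([
        np.corrcoef(true_traj[:, k], pred_traj[:, k])[0, 1]
        for k in range(3)
    ])

# toy check: any affine rescaling of a trajectory is perfectly
# correlated with it, so PCC is 1.0 on every axis
t = np.linspace(0.0, 1.0, 100)
traj = np.stack([t, t**2, np.sin(t)], axis=1)
print(axis_pcc(traj, 2.0 * traj + 0.5))
```

Because PCC is invariant to scale and offset, a decoded trajectory can score highly while still being displaced in absolute coordinates, which is one reason the paper adds trajectory-level post-processing before robot control.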
📝 Abstract
Motor kinematics prediction (MKP) from electroencephalography (EEG) is an important research area for developing movement-related brain-computer interfaces (BCIs). While traditional methods often rely on convolutional neural networks (CNNs) or recurrent neural networks (RNNs), Transformer-based models have shown a strong ability to model long sequential EEG data. In this study, we propose a CNN-attention hybrid model for decoding hand kinematics from EEG during grasp-and-lift tasks, achieving strong performance in within-subject experiments. We further extend this approach to EEG-EMG multimodal decoding, which yields substantially improved results. Within-subject tests achieve PCC values of 0.9854, 0.9946, and 0.9065 for the X, Y, and Z axes, respectively, computed on the midpoint trajectory between the thumb and index finger, while cross-subject tests yield 0.9643, 0.9795, and 0.5852. The decoded trajectories from both modalities are then used to control a Franka Panda robotic arm in a MuJoCo simulation. To enhance trajectory fidelity, we introduce a copilot framework that filters low-confidence decoded points using a motion-state-aware critic within a finite-state machine. This post-processing step improves the overall within-subject PCC of EEG-only decoding to 0.93 while excluding fewer than 20% of the data points.
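The abstract does not spell out the copilot's state machine, so the following is only a hypothetical sketch of the idea it describes: a finite-state machine that tracks the motion phase and drops decoded points whose critic confidence falls below a state-dependent threshold, keeping most of the data. The states, transition rule, and thresholds here are illustrative assumptions, not the paper's design.

```python
from enum import Enum

class MotionState(Enum):
    # illustrative grasp-and-lift phases; the paper's states may differ
    REST = 0
    REACH = 1
    GRASP = 2

def copilot_filter(points, confidences, base_threshold=0.6):
    """Hypothetical copilot sketch: keep a decoded point only when its
    critic confidence clears a threshold tied to the current motion state.

    points: sequence of (x, y, z) tuples; confidences: floats in [0, 1].
    """
    state = MotionState.REST
    kept = []
    for p, c in zip(points, confidences):
        # tighten the threshold during grasping, where precision matters most
        thr = base_threshold + 0.2 if state == MotionState.GRASP else base_threshold
        if c >= thr:
            kept.append(p)
        # toy transition rule: advance one phase on a highly confident point
        if c >= 0.9 and state != MotionState.GRASP:
            state = MotionState(state.value + 1)
    return kept
```

With thresholds tuned so that most points pass, a filter of this shape can discard the noisiest decoded samples (matching the paper's "fewer than 20% excluded" constraint) while the remaining trajectory is handed to the robot controller.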