🤖 AI Summary
To address the challenges of prolonged calibration time, cognitive fatigue, and poor word-boundary detection in implicit speech (imagined speech) brain–computer interfaces, this paper proposes a lightweight calibration method based on cross-paradigm feature transfer from explicit speech EEG. For the first time, Hilbert envelope and temporal fine structure features extracted from explicit speech EEG are transferred to implicit speech decoding. These features are integrated with a bidirectional long short-term memory (BiLSTM) network to model temporal dynamics. The approach significantly reduces user calibration burden, mitigates cognitive load induced by repetitive mental imagery, and enhances robustness in word-boundary discrimination. Experimental results show classification accuracies of 86.44% for explicit speech and 79.82% for implicit speech—setting a new state-of-the-art for implicit speech EEG decoding. These findings validate the effectiveness and practicality of cross-paradigm feature transfer in neural speech decoding.
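The summary names two acoustic-style features extracted from EEG: the Hilbert envelope and the temporal fine structure (TFS). Both fall out of the analytic signal, whose magnitude is the envelope and whose instantaneous phase carries the fine structure. A minimal sketch of that decomposition, assuming `scipy` and a generic single-channel signal (the paper's exact preprocessing pipeline is not specified here):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(x):
    """Split a 1-D signal into Hilbert envelope and temporal fine structure.

    The analytic signal a(t) = x(t) + j*H{x(t)} gives the envelope as |a(t)|
    and the TFS as cos(angle(a(t))). Illustrative helper, not the paper's code.
    """
    analytic = hilbert(x)
    env = np.abs(analytic)            # slowly varying amplitude envelope
    tfs = np.cos(np.angle(analytic))  # rapidly varying fine structure
    return env, tfs

# Example: a 4 Hz amplitude-modulated 40 Hz carrier, 1 s at 256 Hz
fs = 256
t = np.arange(fs) / fs
x = (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 40 * t)
env, tfs = envelope_and_tfs(x)
```

A useful sanity check on this decomposition is that the envelope times the fine structure reconstructs the original signal, since `|a| * cos(angle(a))` is just the real part of the analytic signal.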
📝 Abstract
Brain-Computer Interfaces (BCIs) can decode imagined speech from neural activity. However, these systems typically require extensive training sessions in which participants mentally repeat words, leading to mental fatigue and difficulty identifying word onsets, especially when imagining sequences of words. This paper addresses these challenges by transferring a classifier trained on overt speech data to covert speech classification. We extracted electroencephalogram (EEG) features derived from the Hilbert envelope and temporal fine structure and used them to train a bidirectional long short-term memory (BiLSTM) model for classification. Our method reduces the burden of extensive training and achieves state-of-the-art classification accuracy: 86.44% for overt speech and 79.82% for covert speech using the overt speech classifier.
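The abstract's pipeline, features fed through a BiLSTM that outputs a word class, can be sketched as follows. This is a generic PyTorch illustration under assumed dimensions (2 input features per time step, 64 hidden units, 5 word classes), not the paper's reported architecture:

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Minimal BiLSTM word classifier over per-time-step EEG feature vectors.

    Input sizes, hidden width, and class count are illustrative assumptions.
    """
    def __init__(self, n_features=2, hidden=64, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # forward + backward states

    def forward(self, x):              # x: (batch, time, features)
        out, _ = self.lstm(x)          # out: (batch, time, 2 * hidden)
        return self.head(out[:, -1])   # classify from the final time step

model = BiLSTMClassifier()
logits = model(torch.randn(8, 100, 2))  # 8 trials, 100 time steps each
```

The cross-paradigm transfer described above amounts to fitting such a model on overt-speech trials and then applying it, with little or no extra calibration, to covert-speech trials.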