AI Summary
This study addresses the challenge of simultaneously capturing the multimodal physiological processes underlying speech production, which involves intricate coordination among the brain, muscles, and vocal articulators. To this end, the work presents the first successful implementation of synchronized high-density electroencephalography (EEG), surface electromyography (EMG), and real-time dynamic magnetic resonance imaging (MRI). This integration is enabled by custom-designed electromagnetic compatibility hardware and a dedicated multimodal artifact suppression algorithm, effectively mitigating MRI-induced electromagnetic interference and myogenic artifacts. The resulting robust and synchronized acquisition framework establishes a high-quality, multimodal data foundation that advances the understanding of the neural mechanisms of speech and supports the development of next-generation brain-computer interfaces.
Abstract
Speech production is a complex process spanning neural planning, motor control, muscle activation, and articulatory kinematics. While the acoustic signal is the most accessible product of speech production, it does not directly reveal the causal neurophysiological substrates that generate it. We present the first simultaneous acquisition of real-time (dynamic) MRI, EEG, and surface EMG, capturing several key aspects of the speech production chain: brain signals, muscle activations, and articulatory movements. This multimodal acquisition paradigm presents substantial technical challenges, including MRI-induced electromagnetic interference and myogenic artifacts. To mitigate these, we introduce an artifact suppression pipeline tailored to this tri-modal setting. Once fully developed, this framework is poised to offer an unprecedented window into speech neuroscience and to yield insights that advance brain-computer interfaces.
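The abstract does not detail the suppression pipeline itself, but a common building block for removing MRI gradient artifacts from concurrently recorded EEG/EMG is average artifact (template) subtraction, in which the repetitive artifact locked to each MRI repetition (TR) is estimated by epoch averaging and subtracted. The sketch below is purely illustrative and is not the authors' actual algorithm; the function name, synthetic signals, and all parameters (sampling rate, TR length, amplitudes) are assumptions for demonstration.

```python
import numpy as np

def suppress_gradient_artifact(eeg, tr_onsets, tr_len):
    """Illustrative average-artifact subtraction: estimate the
    TR-locked artifact template by averaging epochs, then subtract
    it from each epoch. Not the paper's pipeline."""
    epochs = np.stack([eeg[o:o + tr_len] for o in tr_onsets])
    template = epochs.mean(axis=0)           # TR-locked artifact estimate
    cleaned = eeg.copy()
    for o in tr_onsets:
        cleaned[o:o + tr_len] -= template    # remove template per epoch
    return cleaned

# Synthetic demo: a small "neural" oscillation plus a much larger
# gradient-like artifact that repeats identically every TR.
fs, tr_len, n_tr = 1000, 500, 20                       # Hz, samples/TR, #TRs
t = np.arange(n_tr * tr_len) / fs
neural = 5e-6 * np.sin(2 * np.pi * 7.3 * t)            # ~microvolt scale
artifact = 1e-3 * np.sin(2 * np.pi * 80 * (t % (tr_len / fs)))  # millivolt scale
onsets = np.arange(0, n_tr * tr_len, tr_len)

cleaned = suppress_gradient_artifact(neural + artifact, onsets, tr_len)
residual = np.abs(cleaned - neural).max()              # leftover artifact
```

Because the artifact is deterministic across TRs while the neural signal is not TR-locked, the epoch average converges to the artifact template and the residual after subtraction is orders of magnitude below the raw artifact. Real recordings additionally require precise TR synchronization and drift correction, which this toy example omits.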