🤖 AI Summary
This study addresses the neurodecoding challenge of directly reconstructing co-speech gestures from fMRI signals—a task hindered by the absence of paired {brain signal–speech–gesture} data. To overcome this, we propose a dual-path brain decoding alignment framework that leverages text as a semantic bridge: it jointly optimizes an fMRI-to-text decoder and a text-to-gesture generator, enabling self-supervised, unpaired cross-modal mapping from fMRI to gesture. Multimodal fusion is achieved via region-of-interest (ROI)-based feature analysis and self-supervised alignment. Experimentally, we achieve the first successful generation of expressive, temporally aligned co-speech gestures from natural speech–evoked fMRI data. Furthermore, our analyses reveal distinct functional contributions of motor, language, and default mode networks to gesture generation. This work establishes a novel paradigm for both brain–computer interfaces and the investigation of speech–gesture coupling in cognitive neuroscience.
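To make the ROI-based feature analysis concrete, below is a minimal sketch of how region-wise fMRI features could be pooled before decoding. The array shapes, ROI names, and the `extract_roi_features` helper are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def extract_roi_features(fmri_volume, roi_masks):
    """Average BOLD activity within each region of interest (ROI).

    fmri_volume : np.ndarray of shape (T, X, Y, Z), a time series of volumes
    roi_masks   : dict mapping ROI name -> boolean mask of shape (X, Y, Z)
    Returns a dict mapping ROI name -> feature vector of shape (T,).
    """
    features = {}
    for name, mask in roi_masks.items():
        # Mean signal over the ROI's voxels at each time point
        features[name] = fmri_volume[:, mask].mean(axis=-1)
    return features

# Hypothetical usage, e.g. comparing motor, language, and default mode ROIs:
# roi_feats = extract_roi_features(fmri_run, {"motor": m1, "language": m2, "dmn": m3})
```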
📝 Abstract
Understanding how the brain responds to external stimuli and decoding this process has been a significant challenge in neuroscience. While previous studies typically concentrated on brain-to-image and brain-to-language reconstruction, our work strives to reconstruct gestures associated with speech stimuli perceived by the brain. Unfortunately, the lack of paired {brain, speech, gesture} data hinders the deployment of deep learning models for this purpose. In this paper, we introduce a novel approach, fMRI2GES, that allows training of fMRI-to-gesture reconstruction networks on unpaired data using Dual Brain Decoding Alignment. This method relies on two key components: (i) observed texts that elicit brain responses, and (ii) textual descriptions associated with the gestures. Then, instead of training models in a completely supervised manner to find a mapping relationship among the three modalities, we harness an fMRI-to-text model and a text-to-gesture model trained on paired data, together with an fMRI-to-gesture model trained on unpaired data, establishing dual fMRI-to-gesture reconstruction paths. Afterward, we explicitly align the two outputs and train our model in a self-supervised manner. We show that our proposed method can reconstruct expressive gestures directly from fMRI recordings. We also investigate fMRI signals from different ROIs in the cortex and how they affect the generation results. Overall, we provide new insights into decoding co-speech gestures, thereby advancing our understanding of neuroscience and cognitive science.
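To illustrate the dual-path alignment described above, here is a minimal sketch of one self-supervised training step, assuming three PyTorch-style modules: an fMRI-to-text decoder, a text-to-gesture generator (pretrained on paired text–gesture data), and a direct fMRI-to-gesture model. The module names, the MSE alignment loss, and the stop-gradient on the text-bridged path are assumptions for the sketch, not the paper's exact objective.

```python
import torch.nn.functional as F

def dual_alignment_step(fmri, fmri_to_text, text_to_gesture, fmri_to_gesture, optimizer):
    """One self-supervised step aligning the two fMRI-to-gesture paths (illustrative sketch)."""
    # Path A: decode fMRI into a text representation, then generate gestures from it
    text_repr = fmri_to_text(fmri)                  # (batch, text_dim)
    gesture_via_text = text_to_gesture(text_repr)   # (batch, frames, pose_dim)

    # Path B: map fMRI directly to gestures (no paired fMRI-gesture supervision exists)
    gesture_direct = fmri_to_gesture(fmri)          # (batch, frames, pose_dim)

    # Explicitly align the two outputs; the text-bridged path serves as a pseudo-target
    loss = F.mse_loss(gesture_direct, gesture_via_text.detach())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Detaching the bridged output treats it as a fixed target for the direct path, which is one common way to keep an unpaired mapping from collapsing to a trivial solution; the actual method may combine this with additional losses.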