🤖 AI Summary
This study addresses the scarcity of brain activity data with directly aligned affective labels, which has hindered research on emotion decoding from neural signals. To overcome this limitation, we propose a novel approach that leverages a pretrained text-based sentiment model to automatically generate affective labels for magnetoencephalography (MEG) recordings collected while participants listened to audiobooks. By employing forced alignment between speech and transcript, these sentiment labels are precisely mapped to their corresponding neural timepoints, yielding the first affective MEG dataset constructed without manual annotation. Using this dataset, we train a Brain-to-Sentiment decoding model that achieves higher balanced accuracy than baseline methods, demonstrating the feasibility and effectiveness of our paradigm as an end-to-end pipeline for emotion decoding from neurophysiological data.
📝 Abstract
Decoding emotion from brain activity could unlock a deeper understanding of the human experience. While a number of existing datasets align brain data with speech and with speech transcripts, none annotate brain data with sentiment. To bridge this gap, we explore the use of pre-trained Text-to-Sentiment models to annotate non-invasive brain recordings, acquired using magnetoencephalography (MEG), while participants listened to audiobooks. Having annotated the text, we use forced alignment of the text and audio to align our sentiment labels with the brain recordings. It is then straightforward to train Brain-to-Sentiment models on these data. Experimental results show an improvement in balanced accuracy for Brain-to-Sentiment decoding compared to baseline, supporting the proposed approach as a proof of concept for leveraging existing MEG datasets and learning to decode sentiment directly from the brain.
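The core of the labeling step described above is a timestamp-to-sample mapping: forced alignment gives word onsets and offsets in the audio, and the sentiment label for each word is projected onto the corresponding span of MEG samples. A minimal sketch of that mapping is below; the sampling rate, word timings, and sentiment labels are all illustrative placeholders, not values from the paper.

```python
# Hypothetical sketch: project word-level sentiment labels onto MEG sample
# indices using forced-alignment timestamps. All names and values here are
# illustrative assumptions, not the authors' actual pipeline.

MEG_SFREQ = 1000  # assumed MEG sampling rate in Hz (illustrative)

# Forced-alignment output: (word, onset_seconds, offset_seconds)
aligned_words = [
    ("the", 0.00, 0.18),
    ("storm", 0.18, 0.55),
    ("destroyed", 0.55, 1.10),
    ("everything", 1.10, 1.72),
]

# Labels a pretrained Text-to-Sentiment model might assign (illustrative)
word_sentiment = {
    "the": "neutral",
    "storm": "negative",
    "destroyed": "negative",
    "everything": "neutral",
}

def label_meg_samples(aligned, sentiments, sfreq):
    """Return (start_sample, end_sample, label) spans over the MEG recording."""
    spans = []
    for word, onset, offset in aligned:
        start = int(round(onset * sfreq))  # audio time -> MEG sample index
        end = int(round(offset * sfreq))
        spans.append((start, end, sentiments[word]))
    return spans

spans = label_meg_samples(aligned_words, word_sentiment, MEG_SFREQ)
# Each span now pairs a stretch of MEG samples with a sentiment label,
# which is the supervision signal a Brain-to-Sentiment model trains on.
```

The resulting spans are exactly the kind of automatically generated labels that replace manual annotation in the proposed pipeline.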