🤖 AI Summary
Prior fMRI encoding studies predominantly employ unimodal stimuli, leaving the neural alignment of multimodal Transformer models under multimodal naturalistic stimulation, particularly audiovisual movie clips, poorly understood. Method: We systematically compare cross-modal (e.g., CLIP-style, contrastively aligned) and jointly pretrained multimodal Transformers in predicting fMRI responses across visual and language cortices. Using modality ablation and representational similarity analysis, we quantify the differential contributions of the video and audio inputs. Contribution/Results: Both model families significantly improve prediction accuracy in visual and language regions over unimodal baselines. Jointly pretrained models rely on synergistic audiovisual integration, whereas cross-modal models depend primarily on visual input. Critically, we uncover integrative neural representations in visual and language cortices that go beyond unimodal embeddings, demonstrating emergent cross-modal coding not captured by single-modality features. This work establishes a novel paradigm for multimodal neural encoding modeling and provides mechanistic insight into how the brain integrates naturalistic audiovisual information.
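For readers unfamiliar with encoding-model pipelines of this kind, the prediction step is typically a regularized linear map from stimulus embeddings to voxel responses. Below is a minimal sketch of such a voxelwise ridge encoding model; the array names, dimensions, and the use of scikit-learn's `RidgeCV` are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal voxelwise encoding-model sketch (assumed setup, not the paper's code).
# Assumes stimulus embeddings have already been extracted from a multimodal
# Transformer and temporally aligned to fMRI TRs; all arrays are placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.standard_normal((1000, 768))   # TRs x embedding dims (placeholder)
bold = rng.standard_normal((1000, 5000))      # TRs x voxels (placeholder)

# Keep temporal order: train on early segments, test on held-out later segments.
X_tr, X_te, y_tr, y_te = train_test_split(features, bold, test_size=0.2, shuffle=False)

# Ridge regression with cross-validated regularization; one weight vector per voxel.
model = RidgeCV(alphas=np.logspace(-1, 4, 10))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

def voxelwise_corr(y_true, y_pred):
    """Pearson correlation between measured and predicted responses, per voxel."""
    yz = (y_true - y_true.mean(0)) / y_true.std(0)
    pz = (y_pred - y_pred.mean(0)) / y_pred.std(0)
    return (yz * pz).mean(0)

print("mean voxel correlation:", voxelwise_corr(y_te, pred).mean())
```

The held-out voxelwise correlation is one common way to operationalize the "prediction accuracy" and "brain alignment" referred to above.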
📝 Abstract
Although participants in prior studies were engaged with unimodal stimuli, such as viewing images or silent videos, recent work has demonstrated that multi-modal Transformer models can predict visual brain activity impressively well, even with incongruent modality representations. This raises the question of how accurately these multi-modal models can predict brain activity when participants are engaged with multi-modal stimuli. As these models grow increasingly popular, using them to study neural activity provides insights into how our brains respond to such multi-modal naturalistic stimuli, i.e., where the brain separates and integrates information across modalities along the hierarchy from early sensory regions to higher cognition. We investigate this question by using multiple unimodal models and two types of multi-modal models (cross-modal and jointly pretrained) to determine which type of model better aligns with fMRI brain activity while participants watch movies. We observe that both types of multi-modal models show improved alignment in several language and visual regions. This study also helps identify which brain regions process unimodal versus multi-modal information. We further investigate the contribution of each modality to multi-modal alignment by carefully removing unimodal features one by one from the multi-modal representations, and find that additional information beyond the unimodal embeddings is processed in the visual and language regions. Based on this investigation, we find that the brain alignment of cross-modal models is partially attributable to the video modality, whereas that of jointly pretrained models is partially attributable to both the video and audio modalities. This serves as a strong motivation for the neuroscience community to investigate the interpretability of these models and deepen our understanding of multi-modal information processing in the brain.
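The modality-removal analysis described above can be illustrated with a residual approach: linearly regress one modality's unimodal embedding out of the multi-modal representation, re-fit the encoding model on what remains, and read the drop in alignment as that modality's contribution. The sketch below uses synthetic placeholder arrays (`video_feat`, `audio_feat`, `mm_feat`, `bold`); the specific regression choices are assumptions rather than the authors' exact procedure.

```python
# Hedged sketch of removing unimodal features from a multi-modal representation
# before computing brain alignment; all arrays and model choices are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV

def remove_modality(mm_feat, uni_feat):
    """Return the part of the multi-modal features not linearly explained by uni_feat."""
    reg = LinearRegression().fit(uni_feat, mm_feat)
    return mm_feat - reg.predict(uni_feat)

def alignment(features, bold, n_train):
    """Fit a ridge encoding model and return the mean held-out voxelwise correlation."""
    model = RidgeCV(alphas=np.logspace(-1, 4, 10))
    model.fit(features[:n_train], bold[:n_train])
    pred = model.predict(features[n_train:])
    y = bold[n_train:]
    yz = (y - y.mean(0)) / y.std(0)
    pz = (pred - pred.mean(0)) / pred.std(0)
    return (yz * pz).mean(0).mean()

rng = np.random.default_rng(0)
n, n_train = 1000, 800
video_feat = rng.standard_normal((n, 512))    # unimodal video embeddings (placeholder)
audio_feat = rng.standard_normal((n, 512))    # unimodal audio embeddings (placeholder)
# Synthetic "multi-modal" features built from both modalities, for illustration only.
mm_feat = np.hstack([video_feat, audio_feat]) @ rng.standard_normal((1024, 768))
bold = rng.standard_normal((n, 2000))         # fMRI responses: TRs x voxels (placeholder)

full = alignment(mm_feat, bold, n_train)
no_video = alignment(remove_modality(mm_feat, video_feat), bold, n_train)
no_audio = alignment(remove_modality(mm_feat, audio_feat), bold, n_train)
print(f"full={full:.3f}  minus video={no_video:.3f}  minus audio={no_audio:.3f}")
```

Under this kind of analysis, a cross-modal model would show a large alignment drop only when video features are removed, whereas a jointly pretrained model would show drops when either video or audio features are removed, matching the pattern reported in the abstract.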