Language models align with brain regions that represent concepts across modalities

📅 2025-08-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the dissociation between representations of linguistic form and representations of conceptual meaning in language models, and examines how well these models align with brain regions involved in cross-modal conceptual processing. Method: the authors propose a novel metric, "cross-modal semantic consistency," which quantifies how consistently a brain region's fMRI response to the same concept agrees across three presentation paradigms: sentences, word clouds, and images. Using both language-only models (LMs) and language-vision models (LVMs), they systematically evaluate how well the models' internal representations predict activation patterns in brain regions exhibiting high cross-modal consistency. Results: both LMs and LVMs predict neural responses better in more meaning-consistent regions, including areas not specialized for language such as the anterior temporal lobe and angular gyrus, suggesting that these models implicitly encode cross-modal conceptual knowledge. This finding provides neuroscientific evidence for semantic representation in large language models and informs the development of brain–machine semantic interfaces.
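The consistency metric described above could be sketched roughly as follows. This is a minimal illustration under assumed definitions (mean pairwise Pearson correlation of a region's concept-by-voxel response patterns across the three paradigms); the paper's exact formulation may differ, and the function name and data shapes are hypothetical.

```python
import numpy as np

def cross_modal_consistency(responses):
    """Hypothetical cross-modal semantic consistency score for one region.

    responses: dict mapping modality name (e.g. "sentence", "wordcloud",
    "image") to an array of shape (n_concepts, n_voxels) holding the
    region's fMRI responses to the same concepts under that paradigm.

    Returns the mean pairwise Pearson correlation of the flattened
    response patterns across modalities (higher = more consistent).
    """
    names = sorted(responses)
    corrs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a = responses[names[i]].ravel()
            b = responses[names[j]].ravel()
            corrs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(corrs))
```

A region responding identically across all three paradigms scores 1.0; unrelated responses score near 0.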

📝 Abstract
Cognitive science and neuroscience have long faced the challenge of disentangling representations of language from representations of conceptual meaning. As the same problem arises in today's language models (LMs), we investigate the relationship between LM–brain alignment and two neural metrics: (1) the level of brain activation during processing of sentences, targeting linguistic processing, and (2) a novel measure of meaning consistency across input modalities, which quantifies how consistently a brain region responds to the same concept across paradigms (sentence, word cloud, image) using an fMRI dataset (Pereira et al., 2018). Our experiments show that both language-only and language-vision models predict the signal better in more meaning-consistent areas of the brain, even when these areas are not strongly sensitive to language processing, suggesting that LMs might internally represent cross-modal conceptual meaning.
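The brain-alignment analysis in the abstract can be sketched as a standard encoding model: fit a regularized linear map from model representations to voxel responses, then score held-out predictions by per-voxel correlation. The split, closed-form ridge solution, and function name below are assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def encoding_alignment(embeddings, responses, alpha=1.0):
    """Hypothetical LM-brain encoding analysis.

    embeddings: (n_stimuli, n_features) model representations of the stimuli.
    responses:  (n_stimuli, n_voxels) fMRI responses to the same stimuli.

    Fits ridge regression on the first half of the stimuli and scores
    held-out predictions by per-voxel Pearson correlation; returns the mean.
    """
    n = len(embeddings)
    train, test = slice(0, n // 2), slice(n // 2, n)
    X, Y = embeddings[train], responses[train]
    # Closed-form ridge: W = (X^T X + alpha * I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
    pred = embeddings[test] @ W
    true = responses[test]
    # Per-voxel correlation between predicted and observed responses
    pred_c = pred - pred.mean(0)
    true_c = true - true.mean(0)
    r = (pred_c * true_c).sum(0) / (
        np.linalg.norm(pred_c, axis=0) * np.linalg.norm(true_c, axis=0)
    )
    return float(np.mean(r))
```

Comparing this score between high- and low-consistency regions is one way to operationalize the paper's claim that models align better with meaning-consistent cortex.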
Problem

Research questions and friction points this paper is trying to address.

Study LM-brain alignment for conceptual meaning representation
Measure meaning consistency across input modalities in brain
Assess if LMs internally represent cross-modal concepts
Innovation

Methods, ideas, or system contributions that make the work stand out.

LMs align with brain regions representing concepts
Novel fMRI measure for cross-modal meaning consistency
Language-vision models also predict signal in meaning-consistent regions