Modeling the language cortex with form-independent and enriched representations of sentence meaning reveals remarkable semantic abstractness

📅 2025-09-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether the language cortex encodes form-invariant, highly abstract sentence-level semantic representations, addressing the fundamental question of whether human language understanding depends on syntactic form. Method: The authors propose a neural response prediction model that integrates multimodal (vision-language) embeddings, cross-image aggregation, sentence paraphrase integration, and implicit contextual enrichment, and they evaluate abstraction empirically through image-generation and context-extension experiments. Contribution/Results: The model substantially improves fMRI response prediction accuracy, matching or surpassing large language models (LLMs) in several language-selective regions. Critically, it generalizes better than state-of-the-art LLMs across semantically equivalent but syntactically divergent constructions, providing direct neural evidence for abstract, syntax-agnostic semantic coding in the human language cortex. These findings point to a distinctive advantage of human semantic representations over AI models in both richness and abstraction.
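The summary describes a standard encoding-model setup: sentence representations are mapped to fMRI responses with a regularized linear model and scored on held-out sentences. Below is a minimal sketch of that pipeline, assuming ridge regression and synthetic placeholder arrays for `embeddings` and `fmri_responses`; the paper's actual features, regression settings, and cross-validation scheme are not specified here.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Placeholder data, one row per sentence (shapes are illustrative):
# embeddings:      (n_sentences, n_features) model representation of each sentence
# fmri_responses:  (n_sentences, n_voxels) responses in language-selective voxels
rng = np.random.default_rng(0)
embeddings = rng.standard_normal((200, 512))
fmri_responses = rng.standard_normal((200, 50))

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, fmri_responses, test_size=0.2, random_state=0
)

# Regularized linear encoding model: one weight vector per voxel.
model = RidgeCV(alphas=np.logspace(-2, 4, 7))
model.fit(X_train, y_train)

# Score each voxel by the correlation between predicted and observed
# held-out responses, a common encoding-model metric.
pred = model.predict(X_test)
voxel_r = [
    np.corrcoef(pred[:, v], y_test[:, v])[0, 1] for v in range(y_test.shape[1])
]
print(f"mean held-out voxel correlation: {np.mean(voxel_r):.3f}")
```

In this framing, each representation discussed below (image-based, paraphrase-averaged, context-enriched) is simply a different choice of `embeddings`.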

📝 Abstract
The human language system represents both linguistic forms and meanings, but the abstractness of the meaning representations remains debated. Here, we searched for abstract representations of meaning in the language cortex by modeling neural responses to sentences using representations from vision and language models. When we generate images corresponding to sentences and extract vision model embeddings, we find that aggregating across multiple generated images yields increasingly accurate predictions of language cortex responses, sometimes rivaling large language models. Similarly, averaging embeddings across multiple paraphrases of a sentence improves prediction accuracy compared to any single paraphrase. Enriching paraphrases with contextual details that may be implicit (e.g., augmenting "I had a pancake" to include details like "maple syrup") further increases prediction accuracy, even surpassing predictions based on the embedding of the original sentence, suggesting that the language system maintains richer and broader semantic representations than language models. Together, these results demonstrate the existence of highly abstract, form-independent meaning representations within the language cortex.
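The cross-image aggregation idea from the abstract can be sketched directly: render several images for a sentence, embed each with a vision encoder, and average the normalized embeddings so that image-specific details cancel while the shared sentence meaning remains. The sketch below assumes a CLIP image encoder from Hugging Face `transformers`; `generate_images` is a hypothetical stand-in for whatever text-to-image model produces the depictions (the paper's specific generator and vision model are not named here).

```python
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumption: CLIP stands in for the paper's vision model.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def generate_images(sentence: str, n: int) -> list[Image.Image]:
    """Hypothetical stand-in for a text-to-image model.
    Returns blank placeholders so the sketch runs end to end."""
    return [Image.new("RGB", (224, 224)) for _ in range(n)]

def aggregated_image_embedding(sentence: str, n_images: int = 8) -> np.ndarray:
    """Average normalized vision embeddings over several generated
    depictions of one sentence: details unique to any single image
    wash out, and the meaning the depictions share dominates."""
    images = generate_images(sentence, n_images)
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)  # (n_images, dim)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats.mean(dim=0).numpy()

emb = aggregated_image_embedding("I had a pancake.")
print(emb.shape)  # (512,) for this CLIP variant
```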
Problem

Research questions and friction points this paper is trying to address.

Modeling abstract meaning representations in language cortex
Comparing vision and language models for neural prediction
Enriching semantic representations beyond linguistic forms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using vision model embeddings from generated images
Averaging embeddings across multiple sentence paraphrases
Enriching paraphrases with implicit contextual details (see the sketch after this list)
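A minimal sketch of the paraphrase-averaging and enrichment steps, assuming a `sentence-transformers` encoder (the paper's text embedding model is not named here). The paraphrases and enriched variants are hand-written for illustration; in practice they would presumably be produced by a language model.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Assumption: any off-the-shelf sentence encoder fills this role.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

sentence = "I had a pancake."

# Syntactically divergent paraphrases of the same meaning.
paraphrases = [
    "I ate a pancake.",
    "A pancake was what I had.",
    "I had myself a pancake.",
]

# Paraphrases enriched with plausibly implicit contextual details.
enriched = [
    "I had a pancake with maple syrup for breakfast.",
    "I ate a warm pancake topped with butter at the kitchen table.",
]

def mean_embedding(sentences: list[str]) -> np.ndarray:
    """Average normalized embeddings so the shared meaning dominates
    over any single phrasing."""
    embs = encoder.encode(sentences, normalize_embeddings=True)
    return embs.mean(axis=0)

paraphrase_emb = mean_embedding(paraphrases)
enriched_emb = mean_embedding(paraphrases + enriched)
```

Either averaged vector then replaces the original sentence's embedding as input to the encoding model.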
Authors

Shreya Saha
University of California San Diego
Shurui Li
ShanghaiTech University
Greta Tuckute
Post-doc, Brain and Cognitive Sciences, MIT
Cognitive neuroscience · Artificial intelligence
Yuanning Li
ShanghaiTech University
Ru-Yuan Zhang
Shanghai Jiao Tong University
Leila Wehbe
Associate Professor, Carnegie Mellon University
Computational Cognitive Neuroscience · NeuroAI · Machine Learning for Science
Evelina Fedorenko
Massachusetts Institute of Technology
Meenakshi Khosla
UC San Diego
Computational Neuroscience · Artificial Intelligence · Vision · Audition · Language