🤖 AI Summary
This study asks whether the language cortex encodes form-independent, highly abstract sentence-level semantic representations, that is, representations of meaning that do not depend on the particular wording or syntactic form of a sentence. Method: Neural responses to sentences are modeled with embeddings from vision and language models, combining cross-image aggregation (averaging vision-model embeddings over multiple images generated from a sentence), paraphrase aggregation (averaging embeddings over multiple paraphrases of a sentence), and enrichment of paraphrases with implicit contextual details. Contribution/Results: These aggregated representations yield substantially improved fMRI response prediction accuracy, sometimes rivaling large language models (LLMs) in language-selective regions. Because predictions improve when embeddings are pooled across semantically equivalent but superficially different renderings of a sentence, and improve further when implicit context is made explicit, the results provide neural evidence for abstract, form-independent semantic coding in the human language cortex and suggest that its semantic representations are richer and broader than those of current language models.
📝 Abstract
The human language system represents both linguistic forms and meanings, but the abstractness of the meaning representations remains debated. Here, we searched for abstract representations of meaning in the language cortex by modeling neural responses to sentences using representations from vision and language models. When we generate images corresponding to sentences and extract vision model embeddings, we find that aggregating across multiple generated images yields increasingly accurate predictions of language cortex responses, sometimes rivaling large language models. Similarly, averaging embeddings across multiple paraphrases of a sentence improves prediction accuracy compared to any single paraphrase. Enriching paraphrases with contextual details that may be implicit (e.g., augmenting "I had a pancake" to include details like "maple syrup") further increases prediction accuracy, even surpassing predictions based on the embedding of the original sentence, suggesting that the language system maintains richer and broader semantic representations than language models. Together, these results demonstrate the existence of highly abstract, form-independent meaning representations within the language cortex.
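The abstract describes an encoding-model approach: embed each sentence several times (via generated images or paraphrases), average the embeddings, and regress the aggregated features onto measured fMRI responses. The paper itself does not include code; the sketch below is only a minimal illustration of that aggregation-plus-regression idea, assuming a caller-supplied `embed_fn` (hypothetical) that returns one embedding per generated image or sampled paraphrase, and using ridge regression with cross-validated voxelwise correlation as the accuracy measure, which are standard choices for fMRI encoding models rather than confirmed details of this study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold


def aggregate_embedding(sentence, embed_fn, n_samples=8):
    """Average embeddings over several stochastic realizations of a sentence.

    `embed_fn(sentence)` is a hypothetical callable returning one embedding
    per call, e.g. the vision-model embedding of a freshly generated image
    for the sentence, or the embedding of a newly sampled paraphrase.
    """
    embs = np.stack([embed_fn(sentence) for _ in range(n_samples)])
    return embs.mean(axis=0)


def fit_encoding_model(features, responses, alpha=1.0):
    """Ridge regression from sentence features to fMRI responses.

    features:  (n_sentences, n_dims) aggregated embeddings
    responses: (n_sentences, n_voxels) measured responses
    """
    model = Ridge(alpha=alpha)
    model.fit(features, responses)
    return model


def cross_validated_accuracy(features, responses, n_splits=5, alpha=1.0):
    """Mean voxelwise correlation between predicted and held-out responses."""
    scores = []
    for train_idx, test_idx in KFold(
        n_splits=n_splits, shuffle=True, random_state=0
    ).split(features):
        model = fit_encoding_model(features[train_idx], responses[train_idx], alpha)
        pred = model.predict(features[test_idx])
        true = responses[test_idx]
        # Correlate predicted and observed responses separately for each voxel.
        r = [np.corrcoef(pred[:, v], true[:, v])[0, 1] for v in range(true.shape[1])]
        scores.append(np.nanmean(r))
    return float(np.mean(scores))
```

Under this reading, the abstract's comparisons amount to computing `cross_validated_accuracy` once with features from a single image or paraphrase per sentence and once with aggregated features, and observing that the aggregated (and context-enriched) features predict language-cortex responses more accurately.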