Experiential Semantic Information and Brain Alignment: Are Multimodal Models Better than Language Models?

📅 2025-04-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
It remains unclear whether multimodal models (e.g., CLIP) capture experiential semantics and align with human fMRI responses better than unimodal language models. Method: We jointly modeled word representations from multimodal and language-only models against experiential semantic norms and high-resolution fMRI data, conducting a cross-modal neural alignment evaluation. Contribution/Results: Contrary to prevailing assumptions, we provide empirical evidence that language models outperform multimodal models in both experiential semantic fidelity and fMRI response prediction accuracy. Their learned representations not only better reflect the structure of human experiential cognition but also encode unique semantic dimensions, beyond those captured by the classical experiential model, that remain highly predictive of neural activity. These findings challenge the hypothesis that multimodality is inherently superior, reveal latent experiential semantic capabilities in language models, and offer novel neurocognitive evidence for the cognitive plausibility of linguistic representations.
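
The paper's exact pipeline is not given here, but the evaluation it describes is a standard cross-validated encoding-model comparison: fit a regularized linear map from word embeddings to voxel responses, then score held-out prediction accuracy per embedding space. A minimal sketch under that assumption (data shapes, alpha grid, and variable names are illustrative, not the authors' code):

```python
# Minimal sketch (not the authors' code): compare how well two word-embedding
# spaces predict voxel-wise fMRI responses via cross-validated ridge regression.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def encoding_score(word_embeddings, fmri_responses, n_splits=5, seed=0):
    """Mean voxel-wise correlation between predicted and observed responses.

    word_embeddings: (n_words, n_dims) matrix, one row per stimulus word.
    fmri_responses:  (n_words, n_voxels) matrix of brain responses.
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train_idx, test_idx in kf.split(word_embeddings):
        model = RidgeCV(alphas=np.logspace(-2, 4, 7))
        model.fit(word_embeddings[train_idx], fmri_responses[train_idx])
        pred = model.predict(word_embeddings[test_idx])
        true = fmri_responses[test_idx]
        # Pearson r per voxel (z-score, multiply, average over samples),
        # then averaged across voxels; epsilon guards constant columns.
        pred_z = (pred - pred.mean(0)) / (pred.std(0) + 1e-8)
        true_z = (true - true.mean(0)) / (true.std(0) + 1e-8)
        scores.append((pred_z * true_z).mean(0).mean())
    return float(np.mean(scores))

# Hypothetical usage: X_clip / X_lm are word embeddings from a multimodal
# vs. a language-only model; Y is the fMRI response matrix for the same words.
# score_clip = encoding_score(X_clip, Y)
# score_lm   = encoding_score(X_lm, Y)
```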

📝 Abstract
A common assumption in computational linguistics is that text representations learnt by multimodal models are richer and more human-like than those learnt by language-only models, because they are grounded in images or audio, much as human language is grounded in real-world experience. However, empirical studies testing whether this is true are largely lacking. We address this gap by comparing word representations from contrastive multimodal models and language-only models in the extent to which they capture experiential information, as defined by an existing norm-based 'experiential model', and align with human fMRI responses. Our results indicate that, surprisingly, language-only models are superior to multimodal ones in both respects. Additionally, they learn more unique brain-relevant semantic information beyond that shared with the experiential model. Overall, our study highlights the need to develop computational models that better integrate the complementary semantic information provided by multimodal data sources.
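
The claim that models learn "unique brain-relevant semantic information beyond that shared with the experiential model" suggests a variance-partitioning style analysis. One common way to estimate that unique contribution (an assumption about the approach, not necessarily the authors' method) is to compare a joint feature model against the experiential features alone, reusing encoding_score() from the sketch above; X_exp, X_lm, and Y are hypothetical names:

```python
# Sketch of a variance-partitioning check (assumed method, not from the paper).
# X_exp: norm-based experiential features; X_lm: language-model embeddings;
# Y: fMRI responses for the same words. Requires encoding_score() defined above.
import numpy as np

def unique_contribution(X_exp, X_lm, Y):
    exp_only = encoding_score(X_exp, Y)                  # experiential alone
    joint = encoding_score(np.hstack([X_exp, X_lm]), Y)  # both feature sets
    # The gain of the joint model over experiential features alone
    # approximates the brain-relevant information unique to the LM.
    return joint - exp_only
```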
Problem

Research questions and friction points this paper is trying to address.

Do multimodal models capture experiential semantic information better than language-only models?
Which model type aligns more closely with human fMRI responses?
How much unique brain-relevant semantic information do models encode beyond the experiential norms?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct comparison of contrastive multimodal models against language-only models
Evaluation of experiential semantic information against a norm-based experiential model
Brain-alignment assessment via prediction of human fMRI responses