Representations in vision and language converge in a shared, multidimensional space of perceived similarities

📅 2025-07-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
How vision and language establish shared semantic representations remains a central challenge in cross-modal cognition, and how such representations manifest behaviorally and neurally is unclear. This study systematically investigates the convergence of image and text representations in a multidimensional perceptual space through three complementary approaches: behavioral similarity judgments, fMRI measurements of neural responses, and mapping via large language model (LLM) embeddings. Results demonstrate high consistency between behavioral and neural similarity structures across modalities, supporting the existence of modality-invariant shared representations. The proposed LLM-embedding mapping model significantly outperforms conventional baselines (category-trained and AlexNet controls) in predicting both behavioral similarity patterns and distributed brain network activations. Crucially, this work achieves the first triple alignment (behavioral, neural, and computational) under a naturalistic stimulus paradigm, and thus provides a unified framework for understanding human multimodal semantic representation.
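The cross-modal consistency described above is the kind of result typically quantified with representational similarity analysis (RSA): build a representational dissimilarity matrix (RDM) for each modality and correlate their geometries. Below is a minimal sketch of that computation; the placeholder data, array shapes, and variable names are illustrative assumptions, not the authors' pipeline.

```python
# Minimal RSA sketch: correlate image-derived and caption-derived
# representational geometries. All data here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Assumed inputs: one feature vector per item in each modality, e.g.
# derived from behavioural similarity judgements on 100 images and
# their 100 corresponding captions (dimensions are illustrative).
image_features = rng.standard_normal((100, 50))
caption_features = rng.standard_normal((100, 50))

# Representational dissimilarity matrices (condensed upper triangles).
image_rdm = pdist(image_features, metric="correlation")
caption_rdm = pdist(caption_features, metric="correlation")

# Cross-modal convergence: rank correlation between the two RDMs.
rho, p = spearmanr(image_rdm, caption_rdm)
print(f"image-caption RDM correlation: rho={rho:.3f}, p={p:.3g}")
```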

📝 Abstract
Humans can effortlessly describe what they see, yet establishing a shared representational format between vision and language remains a significant challenge. Emerging evidence suggests that human brain representations in both vision and language are well predicted by semantic feature spaces obtained from large language models (LLMs). This raises the possibility that sensory systems converge in their inherent ability to transform their inputs onto a shared, embedding-like representational space. However, it remains unclear how such a space manifests in human behaviour. To investigate this, sixty-three participants performed behavioural similarity judgements separately on 100 natural scene images and 100 corresponding sentence captions from the Natural Scenes Dataset. We found that visual and linguistic similarity judgements not only converge at the behavioural level but also predict a remarkably similar network of fMRI brain responses evoked by viewing the natural scene images. Furthermore, computational models trained to map images onto LLM embeddings outperformed both category-trained and AlexNet controls in explaining the behavioural similarity structure. These findings demonstrate that human visual and linguistic similarity judgements are grounded in a shared, modality-agnostic representational structure that mirrors how the visual system encodes experience. The convergence between sensory and artificial systems suggests a common capacity for forming conceptual representations: not as arbitrary products of first-order, modality-specific input, but as structured representations that reflect the stable, relational properties of the external world.
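The mapping model the abstract describes can be pictured as a linear readout from image features into LLM embedding space, scored by how well the predicted embedding geometry explains behavioural similarity on held-out items. The sketch below makes those steps concrete under stated assumptions: the feature dimensionalities, the cross-validated ridge mapper, and all placeholder arrays are illustrative stand-ins, not the paper's actual models or data.

```python
# Hypothetical image-to-LLM-embedding mapping, evaluated against a
# behavioural RDM on held-out items. Every array below is a random
# placeholder standing in for real features, embeddings, and judgements.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_items, img_dim, llm_dim = 100, 512, 768

image_features = rng.standard_normal((n_items, img_dim))   # e.g. visual-model activations
llm_embeddings = rng.standard_normal((n_items, llm_dim))   # caption embeddings from an LLM
# Square behavioural dissimilarity matrix from similarity judgements:
behavioural_rdm = squareform(pdist(rng.standard_normal((n_items, 20))))

X_tr, X_te, Y_tr, Y_te, idx_tr, idx_te = train_test_split(
    image_features, llm_embeddings, np.arange(n_items),
    test_size=0.3, random_state=0,
)

# Cross-validated ridge regression: one linear readout per embedding dimension.
mapper = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)
predicted = mapper.predict(X_te)

# Compare predicted-embedding geometry with behaviour on held-out items.
pred_rdm = pdist(predicted, metric="correlation")
held_out = behavioural_rdm[np.ix_(idx_te, idx_te)]
behav_rdm_te = held_out[np.triu_indices(len(idx_te), k=1)]
rho, _ = spearmanr(pred_rdm, behav_rdm_te)
print(f"held-out behaviour vs. predicted embeddings: rho={rho:.3f}")
```

In the paper this LLM-embedding mapping is compared against category-trained and AlexNet controls; only the evaluation logic is sketched here.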
Problem

Research questions and friction points this paper is trying to address.

Establish a shared representational format between vision and language
Understand the behavioral manifestation of a shared semantic space
Compare conceptual representations in human and artificial systems
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM embeddings predict human brain representations
Behavioral similarity reflects a shared representational space
Modality-agnostic structure mirrors visual encoding
Katerina Marie Simkova
Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
Adrien Doerig
Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
Clayton Hickey
CHBH, School of Psychology, University of Birmingham, Birmingham, England, United Kingdom
Ian Charest
Département de psychologie, Université de Montréal
fMRI · visual object representations · voice perception · face perception · MVPA