🤖 AI Summary
This study investigates whether large language models (LLMs) and multimodal LLMs (MLLMs) spontaneously develop human-like object concept representations.

Method: 4.7 million triplet odd-one-out judgments were collected from an LLM and an MLLM and used to derive a 66-dimensional semantic embedding space covering 1,854 natural objects. Representational similarity analysis (RSA) against functional magnetic resonance imaging (fMRI) data then assessed the alignment between the model embeddings and neural activity in category-selective regions of human visual cortex, including the fusiform face area (FFA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and extrastriate body area (EBA).

Contribution/Results: The model-derived embeddings are stable, predictive, and semantically interpretable, closely matching human semantic intuition, and they align significantly with neural activity across these category-selective regions (reported at p < 0.001, R² ≥ 0.62). This work provides cross-modal, empirically testable evidence for deep commonalities between machine cognition and human perception.
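The RSA step described above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) from the model embeddings and another from the (here simulated) ROI voxel patterns, then correlate their upper triangles. This is a minimal illustration, not the paper's pipeline; the data shapes and noise level are made up for the example.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(model_embeddings, neural_patterns):
    """Correlate the upper triangles of the two representational
    dissimilarity matrices (correlation distance, Spearman rho)."""
    rdm_model = pdist(model_embeddings, metric="correlation")
    rdm_neural = pdist(neural_patterns, metric="correlation")
    rho, p_value = spearmanr(rdm_model, rdm_neural)
    return rho, p_value

rng = np.random.default_rng(0)
# 50 toy objects in a 66-dimensional embedding space
objects = rng.normal(size=(50, 66))
# Simulated ROI responses: a noisy linear readout of the embeddings
neural = objects @ rng.normal(size=(66, 200)) + 0.1 * rng.normal(size=(50, 200))

rho, p_value = rsa_score(objects, neural)
print(f"RSA rho = {rho:.2f}")  # high rho: model and ROI share geometry
```

Because the simulated voxel patterns are a linear transform of the embeddings plus small noise, the two RDMs share most of their geometry and the Spearman correlation comes out high; with real fMRI data the alignment is of course far noisier.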
📝 Abstract
The conceptualization and categorization of natural objects in the human mind have long intrigued cognitive scientists and neuroscientists, offering crucial insights into human perception and cognition. Recently, the rapid development of Large Language Models (LLMs) has raised the intriguing question of whether these models can also develop human-like object representations through exposure to vast amounts of linguistic and multimodal data. In this study, we combined behavioral and neuroimaging analyses to uncover how the object concept representations in LLMs correlate with those of humans. By collecting a large-scale dataset of 4.7 million triplet judgments from an LLM and a Multimodal LLM (MLLM), we were able to derive low-dimensional embeddings that capture the underlying similarity structure of 1,854 natural objects. The resulting 66-dimensional embeddings were found to be highly stable and predictive, and exhibited semantic clustering akin to human mental representations. Interestingly, the interpretability of the dimensions underlying these embeddings suggests that the LLM and MLLM have developed human-like conceptual representations of natural objects. Further analysis demonstrated strong alignment between the identified model embeddings and neural activity patterns in many functionally defined brain ROIs (e.g., EBA, PPA, RSC, and FFA). This provides compelling evidence that the object representations in LLMs, while not identical to those in humans, share fundamental commonalities that reflect key schemas of human conceptual knowledge. This study advances our understanding of machine intelligence and informs the development of more human-like artificial cognitive systems.
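The triplet odd-one-out task underlying the 4.7 million judgments can be illustrated with a toy choice model: given embeddings for three objects, the pair with the highest similarity is kept together and the remaining item is the odd one out, with choice probabilities given by a softmax over the pairwise similarities (one common formulation in SPoSE-style embedding models). The function name and the three toy embedding vectors below are hypothetical, chosen only to make the example concrete.

```python
import numpy as np

def odd_one_out_probs(e_i, e_j, e_k):
    """Softmax choice model over pairwise dot-product similarities.
    P(item is odd) is proportional to exp(similarity of the OTHER pair).
    Returns probabilities that i, j, k (in that order) is the odd one out."""
    s_ij, s_ik, s_jk = e_i @ e_j, e_i @ e_k, e_j @ e_k
    logits = np.array([s_jk, s_ik, s_ij])  # odd = i, j, k respectively
    exp = np.exp(logits - logits.max())    # stabilized softmax
    return exp / exp.sum()

# Toy embeddings: "dog" and "wolf" are similar, "hammer" is distinct
dog = np.array([1.0, 0.9, 0.0])
wolf = np.array([0.9, 1.0, 0.0])
hammer = np.array([0.0, 0.1, 1.0])

p = odd_one_out_probs(dog, wolf, hammer)
print(p)  # highest probability on index 2 ("hammer")
```

Fitting the 66-dimensional embeddings then amounts to choosing object vectors that maximize the likelihood of the millions of observed odd-one-out choices under this kind of model.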