Do Large Language Models Think Like the Brain? Sentence-Level Evidence from fMRI and Hierarchical Embeddings

📅 2025-05-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Understanding how large language models (LLMs) align with human neural representations during sentence comprehension remains an open question in cognitive neuroscience and AI. Method: We constructed sentence-level neural predictive models by correlating hierarchical embeddings from 14 publicly available LLMs with naturalistic fMRI data, systematically analyzing cross-modal representational alignment. Contribution/Results: We provide the first evidence that improvements in LLM performance, not merely scale, drive internal representational structures toward brain-like hierarchies. Specifically, intermediate-to-high-level representations, particularly those capturing semantic abstraction, exhibit significant correspondence with activation in the left-lateralized language cortex (r > 0.45, p < 0.001). Crucially, both functional and anatomical alignment strengthen monotonically with model capability. These findings offer convergent evidence for brain-like mechanisms in LLMs and establish the evolution of representational hierarchies as a bridge linking AI architectures to cognitive neuroscience.

📝 Abstract
Understanding whether large language models (LLMs) and the human brain converge on similar computational principles remains a fundamental question in cognitive neuroscience and AI. Do the brain-like patterns observed in LLMs emerge simply from scaling, or do they reflect deeper alignment with the architecture of human language processing? This study focuses on the sentence-level neural mechanisms of language models, systematically investigating how hierarchical representations in LLMs align with the dynamic neural responses during human sentence comprehension. By comparing hierarchical embeddings from 14 publicly available LLMs with fMRI data from participants who listened to a naturalistic narrative story, we constructed sentence-level neural prediction models to precisely identify the model layers most strongly correlated with brain region activations. Results show that improvements in model performance drive the evolution of representational architectures toward brain-like hierarchies, with stronger functional and anatomical correspondence at higher levels of semantic abstraction.
Problem

Research questions and friction points this paper is trying to address.

Do LLMs and human brains share similar computational principles?
Are brain-like patterns in LLMs due to scaling or deeper alignment?
How do hierarchical LLM representations align with human sentence processing?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Compare hierarchical embeddings with fMRI data
Construct sentence-level neural prediction models
Identify model layers correlated with brain activations
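The pipeline sketched in these bullets can be illustrated with a minimal layer-wise encoding analysis. The sketch below is hypothetical (function names, shapes, and the ridge penalty are assumptions, not taken from the paper): for each LLM layer, it fits a linear map from sentence embeddings to fMRI responses on training sentences, then scores held-out Pearson correlation and reports the best-aligned layer.

```python
# Hypothetical layer-wise encoding analysis (a sketch, not the paper's code):
# fit a ridge map from each layer's sentence embeddings to voxel responses,
# then pick the layer with the highest held-out Pearson correlation.
import numpy as np

def pearson_per_voxel(pred, actual):
    """Column-wise Pearson r between predicted and actual voxel responses."""
    p = pred - pred.mean(axis=0)
    a = actual - actual.mean(axis=0)
    denom = np.sqrt((p ** 2).sum(axis=0) * (a ** 2).sum(axis=0))
    return (p * a).sum(axis=0) / denom

def best_aligned_layer(layer_embeddings, voxels, n_train, alpha=1.0):
    """layer_embeddings: list of (n_sentences, dim) arrays, one per LLM layer.
    voxels: (n_sentences, n_voxels) fMRI responses per sentence.
    Returns (best_layer_index, mean held-out r for that layer)."""
    scores = []
    for X in layer_embeddings:
        X_tr, X_te = X[:n_train], X[n_train:]
        Y_tr, Y_te = voxels[:n_train], voxels[n_train:]
        # Closed-form ridge regression: W = (X'X + aI)^-1 X'Y
        W = np.linalg.solve(
            X_tr.T @ X_tr + alpha * np.eye(X.shape[1]), X_tr.T @ Y_tr
        )
        scores.append(pearson_per_voxel(X_te @ W, Y_te).mean())
    best = int(np.argmax(scores))
    return best, scores[best]
```

In a real analysis, embeddings would come from the 14 LLMs' hidden states averaged over each sentence, and significance would be assessed against a permutation null rather than raw r values.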
Yu Lei
Beijing University of Posts and Telecommunications
Xingyang Ge
Shandong University
Yi Zhang
FAU Erlangen-Nuremberg
Yiming Yang
Linguistic Science Laboratory, Jiangsu Normal University; Collaborative Innovation Center for Language Ability, Jiangsu Normal University; School of Linguistic Sciences and Arts, Jiangsu Normal University
Bolei Ma
LMU Munich
Linguistics · Natural Language Processing · Computational Social Science