Vectors from Larger Language Models Predict Human Reading Time and fMRI Data More Poorly when Dimensionality Expansion is Controlled

📅 2025-05-18
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study identifies a paradox in large language models (LLMs): under strict vector-dimension control, larger LLMs exhibit significantly *worse* predictive performance for human language processing—specifically reading times and fMRI neural responses—revealing an “inverse scaling” phenomenon. To isolate model-scale effects from dimensionality confounds, we align representations across models via principal component analysis and random projection, then evaluate predictive validity of multi-layer LLM hidden states using linear encoding models. Results show that, after dimension normalization, the largest models explain up to 37% less variance in reading times and fMRI responses compared to mid-sized models. This work provides the first systematic empirical evidence that LLM architectures are fundamentally misaligned with the human brain’s syntactic–semantic integration mechanisms—and that this misalignment intensifies with scale. It directly challenges the “bigger is better” assumption in cognitive alignment research, establishing a critical empirical benchmark and prompting theoretical reevaluation of how LLMs approximate human language processing.
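The dimension-normalization step described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes scikit-learn's `PCA` and `GaussianRandomProjection` as stand-ins for the paper's principal component analysis and random projection, and the widths, word counts, and target dimension `k` are made-up values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

def reduce_to_common_dim(hidden_states, k=128, method="pca", seed=0):
    """Project LLM hidden states (n_words, d_model) down to k dimensions,
    so models of different widths contribute equally many predictors."""
    if method == "pca":
        reducer = PCA(n_components=k, random_state=seed)
    else:
        reducer = GaussianRandomProjection(n_components=k, random_state=seed)
    return reducer.fit_transform(hidden_states)

# Toy example: a "small" and a "large" model end up with the same width,
# removing the dimensionality confound before fitting encoding models.
rng = np.random.default_rng(0)
small = rng.normal(size=(500, 768))    # e.g. a GPT-2-small-width model
large = rng.normal(size=(500, 4096))   # e.g. a much wider model
assert reduce_to_common_dim(small).shape == (500, 128)
assert reduce_to_common_dim(large).shape == (500, 128)
```

With both feature matrices at a common width, any remaining difference in predictive fit can be attributed to model scale rather than to the larger model simply having more regressors.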

📝 Abstract
The impressive linguistic abilities of large language models (LLMs) have recommended them as models of human sentence processing, with some conjecturing a positive 'quality-power' relationship (Wilcox et al., 2023), in which language models' (LMs') fit to psychometric data continues to improve as their ability to predict words in context increases. This is important because it suggests that elements of LLM architecture, such as veridical attention to context and a unique objective of predicting upcoming words, reflect the architecture of the human sentence processing faculty, and that any inadequacies in predicting human reading time and brain imaging data may be attributed to insufficient model complexity, which recedes as larger models become available. Recent studies (Oh and Schuler, 2023) have shown this scaling inverts after a point, as LMs become excessively large and accurate, when word prediction probability (as information-theoretic surprisal) is used as a predictor. Other studies propose the use of entire vectors from differently sized LLMs, still showing positive scaling (Schrimpf et al., 2021), casting doubt on the value of surprisal as a predictor, but do not control for the larger number of predictors in vectors from larger LMs. This study evaluates LLM scaling using entire LLM vectors, while controlling for the larger number of predictors in vectors from larger LLMs. Results show that inverse scaling obtains, suggesting that inadequacies in predicting human reading time and brain imaging data may be due to substantial misalignment between LLMs and human sentence processing, which worsens as larger models are used.
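The evaluation the abstract describes, fitting a linear encoding model from (dimension-matched) LLM vectors to psychometric data and scoring held-out variance explained, can be sketched roughly as below. This is an assumed setup, not the paper's pipeline: ridge regression and 5-fold cross-validation are my stand-ins for the unspecified linear model, and all data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

def encoding_model_r2(features, reading_times, folds=5):
    """Cross-validated variance explained (R^2) by a linear encoding model
    mapping per-word LLM features to per-word reading times (or fMRI signal)."""
    model = RidgeCV(alphas=np.logspace(-2, 4, 13))
    scores = cross_val_score(model, features, reading_times,
                             cv=folds, scoring="r2")
    return scores.mean()

# Synthetic sanity check: features that actually drive the behavioral
# signal should explain more held-out variance than unrelated features.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 64))          # dimension-matched model features
w = rng.normal(size=64)
y = X @ w + rng.normal(scale=0.5, size=400)  # simulated reading times
informative = encoding_model_r2(X, y)
noise = encoding_model_r2(rng.normal(size=(400, 64)), y)
assert informative > noise
```

In the study's design, this score would be computed for vectors from each model size after projection to a common dimension; inverse scaling then shows up as lower held-out R^2 for the largest models.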
Problem

Research questions and friction points this paper is trying to address.

Assess LLM vector scaling impact on human reading time prediction
Control dimensionality expansion in LLM vectors for fMRI data
Investigate misalignment between LLMs and human sentence processing
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controls dimensionality expansion in LLM vectors
Uses entire vectors from differently sized LLMs
Evaluates inverse scaling in LLM-human alignment