AI Summary
This study identifies a paradox in large language models (LLMs): under strict control of vector dimensionality, larger LLMs exhibit significantly *worse* predictive performance for human language processing, specifically reading times and fMRI neural responses, revealing an "inverse scaling" phenomenon. To isolate model-scale effects from dimensionality confounds, we align representations across models via principal component analysis and random projection, then evaluate the predictive validity of multi-layer LLM hidden states using linear encoding models. Results show that, after dimension normalization, the largest models explain up to 37% less variance in reading times and fMRI responses than mid-sized models. This work provides the first systematic empirical evidence that LLM architectures are fundamentally misaligned with the human brain's syntactic-semantic integration mechanisms, and that this misalignment intensifies with scale. It directly challenges the "bigger is better" assumption in cognitive alignment research, establishing a critical empirical benchmark and prompting a theoretical reevaluation of how LLMs approximate human language processing.
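The dimension-control step described above can be pictured with a minimal sketch. This is not the authors' pipeline: the function name `dimension_controlled_r2`, the use of scikit-learn's `PCA`, `GaussianRandomProjection`, and `RidgeCV`, the 256-dimension target, and the 5-fold cross-validation are all illustrative assumptions. Only the general idea, projecting every model's hidden states to the same number of predictors before fitting a linear encoding model against reading times or fMRI responses, comes from the summary.

```python
# Hypothetical sketch: hold the number of predictors fixed across models of
# different sizes, then score a cross-validated linear encoding model.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score
from sklearn.random_projection import GaussianRandomProjection

def dimension_controlled_r2(hidden_states, targets, n_dims=256, method="pca", seed=0):
    """Project word-level hidden states (n_words x d_model) down to n_dims, then
    return the mean cross-validated R^2 of a ridge encoding model predicting the
    behavioral or neural targets (e.g., per-word reading times)."""
    if method == "pca":
        reducer = PCA(n_components=n_dims, random_state=seed)
    else:
        reducer = GaussianRandomProjection(n_components=n_dims, random_state=seed)
    X = reducer.fit_transform(hidden_states)
    encoder = RidgeCV(alphas=np.logspace(-2, 4, 13))
    return cross_val_score(encoder, X, targets, cv=5, scoring="r2").mean()

# Usage (illustrative shapes only): compare a small and a large model at the
# same predictor count, so any gap reflects the representations, not their width.
# small_states = ...   # (n_words, 768) hidden states from a small LM
# large_states = ...   # (n_words, 4096) hidden states from a large LM
# rts = ...            # (n_words,) reading times
# print(dimension_controlled_r2(small_states, rts), dimension_controlled_r2(large_states, rts))
```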
Abstract
The impressive linguistic abilities of large language models (LLMs) have recommended them as models of human sentence processing, with some conjecturing a positive 'quality-power' relationship (Wilcox et al., 2023), in which language models' (LMs') fit to psychometric data continues to improve as their ability to predict words in context increases. This is important because it suggests that elements of LLM architecture, such as veridical attention to context and a unique objective of predicting upcoming words, reflect the architecture of the human sentence processing faculty, and that any inadequacies in predicting human reading time and brain imaging data may be attributed to insufficient model complexity, which should recede as larger models become available. Recent studies (Oh and Schuler, 2023) have shown that this scaling inverts after a point, as LMs become excessively large and accurate, when word prediction probability (as information-theoretic surprisal) is used as a predictor. Other studies propose the use of entire vectors from differently sized LLMs and still show positive scaling (Schrimpf et al., 2021), casting doubt on the value of surprisal as a predictor, but they do not control for the larger number of predictors in vectors from larger LMs. This study evaluates LLM scaling using entire LLM vectors while controlling for the larger number of predictors in vectors from larger LLMs. Results show that inverse scaling obtains, suggesting that inadequacies in predicting human reading time and brain imaging data may be due to substantial misalignment between LLMs and human sentence processing, a misalignment that worsens as larger models are used.
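For readers unfamiliar with the surprisal-based side of this comparison, the snippet below is an illustrative sketch, not the models or code used in the study: GPT-2 via Hugging Face `transformers` is only a stand-in, the function name `token_surprisals` is invented here, and the alignment of subword tokens to words (needed before regressing reading times) is glossed over. It computes per-token surprisal, the negative log probability of each token given its left context, which is the word-prediction quantity over which the scaling results are stated.

```python
# Hypothetical sketch: per-token surprisal (in bits) from a causal LM.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    """Surprisal of each token after the first, given its left context."""
    ids = tokenizer(text, return_tensors="pt").input_ids            # (1, seq_len)
    with torch.no_grad():
        logits = model(ids).logits                                  # (1, seq_len, vocab)
    # Logits at position t are the model's prediction for the token at position t+1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nats = -log_probs[torch.arange(targets.shape[0]), targets]
    return (nats / torch.log(torch.tensor(2.0))).tolist()           # nats -> bits

print(token_surprisals("The horse raced past the barn fell."))
```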