🤖 AI Summary
This study investigates how language model scale and compression influence alignment with human brain activity, aiming to identify the minimal model capacity required for effective neural alignment. Through systematic evaluation of models up to 14B parameters, including quantized and pruned variants, the authors find that a 3B-parameter model achieves brain prediction performance comparable to larger models, and that most compression techniques reduce computational cost without significantly compromising neural predictivity, with GPTQ quantization as a consistent exception. Notably, the research reveals a dissociation between task performance and brain alignment: although compressed models retain strong alignment with fMRI responses, their linguistic capabilities, particularly in syntax, morphology, and discourse, degrade markedly. These findings suggest that brain alignment saturates around the 3B scale, offering practical guidance for designing efficient, brain-aligned language models.
📝 Abstract
Recent work has shown that scaling large language models (LLMs) improves their alignment with human brain activity, yet it remains unclear what drives these gains and which representational properties are responsible. Although larger models often yield better task performance and brain alignment, they are increasingly difficult to analyze mechanistically. This raises a fundamental question: what is the minimal model capacity required to capture brain-relevant representations? To address this question, we systematically investigate how constraining model scale and numerical precision affects brain alignment. We compare full-precision LLMs, small language models (SLMs), and compressed variants (quantized and pruned) by predicting fMRI responses during naturalistic language comprehension. Across model families up to 14B parameters, we find that 3B SLMs achieve brain predictivity indistinguishable from larger LLMs, whereas 1B models degrade substantially, particularly in semantic language regions. Brain alignment is remarkably robust to compression: most quantization and pruning methods preserve neural predictivity, with GPTQ as a consistent exception. Linguistic probing reveals a dissociation between task performance and brain predictivity: compression degrades discourse, syntax, and morphology, yet brain predictivity remains largely unchanged. Overall, brain alignment saturates at modest model scales and is resilient to compression, challenging common assumptions about neural scaling and motivating compact models for brain-aligned language modeling.
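The brain predictivity the abstract refers to is typically measured with a linear encoding model: a regularized regression from a model's hidden states to voxel-wise fMRI responses, scored by held-out correlation. A minimal sketch of that standard setup, with synthetic data standing in for real LLM activations and BOLD recordings (the ridge formulation and the specific dimensions are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real inputs: per-word hidden states from a language
# model (n_words x dim) and fMRI voxel responses (n_words x n_voxels).
n_words, dim, n_voxels = 600, 64, 20
X = rng.standard_normal((n_words, dim))
Y = X @ rng.standard_normal((dim, n_voxels)) \
    + 0.5 * rng.standard_normal((n_words, n_voxels))

# Hold out a test set and fit ridge regression in closed form:
# W = (X'X + alpha*I)^-1 X'Y.
split, alpha = 400, 1.0
Xtr, Ytr, Xte, Yte = X[:split], Y[:split], X[split:], Y[split:]
W = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(dim), Xtr.T @ Ytr)

# "Brain predictivity": per-voxel Pearson correlation between predicted
# and observed held-out responses, averaged across voxels.
pred = Xte @ W
r_per_voxel = [np.corrcoef(pred[:, v], Yte[:, v])[0, 1]
               for v in range(n_voxels)]
print(f"mean predictivity r = {np.mean(r_per_voxel):.3f}")
```

Comparing this score across full-precision, quantized, and pruned variants of the same model is what lets the authors ask whether compression harms neural alignment independently of task performance.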