Linguistic properties and model scale in brain encoding: from small to compressed language models

📅 2026-02-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how language model scale and compression influence alignment with human brain activity, aiming to identify the minimal model capacity required for effective neural alignment. Through systematic evaluation of model families up to 14B parameters, including quantized and pruned variants, the authors find that 3B-parameter models achieve brain prediction performance comparable to larger models, and that most compression techniques substantially reduce computational cost without compromising neural predictivity, with GPTQ quantization as a consistent exception. Notably, the research reveals a dissociation between task performance and brain alignment: although compressed models retain strong alignment with fMRI responses, their linguistic capabilities, particularly in syntax and discourse, degrade markedly. These findings suggest that brain alignment saturates around the 3B scale, offering practical guidance for designing efficient brain-aligned language models.

📝 Abstract
Recent work has shown that scaling large language models (LLMs) improves their alignment with human brain activity, yet it remains unclear what drives these gains and which representational properties are responsible. Although larger models often yield better task performance and brain alignment, they are increasingly difficult to analyze mechanistically. This raises a fundamental question: what is the minimal model capacity required to capture brain-relevant representations? To address this question, we systematically investigate how constraining model scale and numerical precision affects brain alignment. We compare full-precision LLMs, small language models (SLMs), and compressed variants (quantized and pruned) by predicting fMRI responses during naturalistic language comprehension. Across model families up to 14B parameters, we find that 3B SLMs achieve brain predictivity indistinguishable from larger LLMs, whereas 1B models degrade substantially, particularly in semantic language regions. Brain alignment is remarkably robust to compression: most quantization and pruning methods preserve neural predictivity, with GPTQ as a consistent exception. Linguistic probing reveals a dissociation between task performance and brain predictivity: compression degrades discourse, syntax, and morphology, yet brain predictivity remains largely unchanged. Overall, brain alignment saturates at modest model scales and is resilient to compression, challenging common assumptions about neural scaling and motivating compact models for brain-aligned language modeling.
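The abstract's core measurement, predicting fMRI responses from model activations and scoring "brain predictivity", is conventionally done with a voxelwise ridge-regression encoding model. The sketch below illustrates that standard pipeline on synthetic data; the array shapes, the fixed ridge penalty, and the per-voxel Pearson scoring are assumptions for illustration, not the paper's actual code or datasets.

```python
# Minimal sketch of a voxelwise brain-encoding evaluation (assumed standard
# pipeline): ridge-regress language-model hidden states onto fMRI responses,
# then score held-out Pearson correlation per voxel.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative shapes: fMRI time points (TRs), model feature dims, voxels.
n_trs, n_features, n_voxels = 300, 256, 50
X = rng.standard_normal((n_trs, n_features))          # stand-in for LM activations per TR
W = rng.standard_normal((n_features, n_voxels))       # synthetic ground-truth mapping
Y = X @ W + 0.5 * rng.standard_normal((n_trs, n_voxels))  # synthetic voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# One ridge fit shared across all voxels (sklearn handles multi-output targets).
model = Ridge(alpha=10.0).fit(X_tr, Y_tr)
Y_hat = model.predict(X_te)

# "Brain predictivity": Pearson r between predicted and observed, per voxel.
r = np.array([np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)])
print(f"mean voxelwise r = {r.mean():.3f}")
```

In practice the regularization strength is tuned per voxel via cross-validation and the stimulus features come from a chosen layer of each model, which is how comparisons across scales and compression variants would be made.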
Problem

Research questions and friction points this paper is trying to address.

brain encoding
language models
model scale
compression
neural alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

brain alignment
small language models
model compression
neural predictivity
linguistic probing
Subba Reddy Oota
TU Berlin, Germany
Vijay Rowtula
IIIT Hyderabad
Computer Vision, Natural Language Processing
Satya Sai Srinath Namburi
GE HealthCare
Khushbu Pahwa
AWS AI Labs, Amazon
Anant Khandelwal
Microsoft Research, Bangalore, India
Manish Gupta
Bing, Microsoft
Deep Learning, Natural Language Processing, Web Mining, Data Mining, Neuroscience
Tanmoy Chakraborty
Associate Professor, IIT Delhi, India
Natural Language Processing, Large Language Models, Social Computing
B
Bapi S. Raju
IIIT-Hyderabad, India