Language Statistics and False Belief Reasoning: Evidence from 41 Open-Weight LMs

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates whether language models (LMs) show theory-of-mind-like sensitivity to false beliefs, and whether linguistic experience can account for features of human social cognition. By replicating and extending classic false-belief tasks across 41 open-weight LMs from distinct model families, it systematically evaluates model sensitivity to implied knowledge states and compares model behavior with human behavior. It also uses LM behavior to generate and test a novel hypothesis about human cognition: that non-factive verb cues ("John thinks...") bias belief attribution. Results show that 34% of the models are sensitive to implied knowledge states, that this sensitivity strengthens with model scale, and that humans and models align closely in their belief-attribution bias under non-factive verbs, supporting the view that the distributional statistics of language can partially account for this cue effect, though not for the primary knowledge-state effect, in humans.

📝 Abstract
Research on mental state reasoning in language models (LMs) has the potential to inform theories of human social cognition, such as the theory that mental state reasoning emerges in part from language exposure, as well as our understanding of LMs themselves. Yet much published work on LMs relies on a relatively small sample of closed-source LMs, limiting our ability to rigorously test psychological theories and evaluate LM capacities. Here, we replicate and extend published work on the false belief task by assessing LM mental state reasoning behavior across 41 open-weight models (from distinct model families). We find sensitivity to implied knowledge states in 34% of the LMs tested; however, consistent with prior work, none fully "explain away" the effect in humans. Larger LMs show increased sensitivity and also exhibit higher psychometric predictive power. Finally, we use LM behavior to generate and test a novel hypothesis about human cognition: both humans and LMs show a stronger bias towards attributing false beliefs when knowledge states are cued using a non-factive verb ("John thinks...") than when cued indirectly ("John looks in the..."). Unlike the primary effect of knowledge states, where human sensitivity exceeds that of LMs, the magnitude of the human knowledge cue effect falls squarely within the distribution of LM effect sizes, suggesting that distributional statistics of language can in principle account for the latter but not the former in humans. These results demonstrate the value of using larger samples of open-weight LMs to test theories of human cognition and evaluate LM capacities.
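The abstract describes scoring each LM's sensitivity to implied knowledge states in a false-belief task. The paper does not include code here, but the standard logic of such an analysis can be sketched as a difference-in-differences over completion log-probabilities. Everything below (the function name, the toy numbers, the dictionary layout) is an illustrative assumption, not the authors' implementation; real log-probabilities would come from an actual LM.

```python
# Sketch (not the authors' code) of scoring sensitivity to implied
# knowledge states in a false-belief task. An LM assigns
# log-probabilities to two story completions:
#   "belief"  -> the location where the character last saw the object
#   "reality" -> the object's true current location
# under two conditions:
#   "knowledge" -> the character saw the object being moved
#   "ignorance" -> the character did not see the move
# A knowledge-sensitive model should prefer the "reality" completion
# more strongly in the knowledge condition than in the ignorance one.

def knowledge_state_effect(logp):
    """Difference-in-differences of completion log-probabilities.

    logp is a nested dict: logp[condition][completion] = log-probability.
    Positive values indicate sensitivity to the implied knowledge state.
    """
    pref = {cond: logp[cond]["reality"] - logp[cond]["belief"]
            for cond in ("knowledge", "ignorance")}
    return pref["knowledge"] - pref["ignorance"]

# Toy, made-up log-probabilities for illustration only.
toy_logp = {
    "knowledge": {"reality": -1.2, "belief": -2.5},
    "ignorance": {"reality": -2.0, "belief": -1.4},
}
effect = knowledge_state_effect(toy_logp)  # 1.3 - (-0.6) = 1.9
```

Comparing this per-model effect size against the human effect size is what lets the paper ask whether language statistics alone could account for the human pattern.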
Problem

Research questions and friction points this paper addresses.

false belief reasoning
language models
mental state reasoning
open-weight LMs
language statistics
Innovation

Methods, ideas, or system contributions that make the work stand out.

false belief reasoning
open-weight language models
mental state reasoning
distributional language statistics
cognitive modeling