Dissecting the Ullman Variations with a SCALPEL: Why do LLMs fail at Trivial Alterations to the False Belief Task?

📅 2024-06-20
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This study investigates the root cause of large language models' (LLMs') poor performance on trivially altered false-belief tasks: whether it reflects a fundamental lack of theory-of-mind (ToM) capacity or limitations in implicit commonsense reasoning. The authors propose SCALPEL, a technique that generates targeted, incremental modifications to False Belief stimuli in order to test specific hypotheses about why LLMs fail. By stating commonsense premises explicitly (e.g., "the container is transparent, so its contents are visible"), SCALPEL disentangles mental-state representation from reliance on implicit commonsense inference. The experiments indicate that LLMs' failures are largely attributable to missing commonsense inferences rather than to ToM deficits: performance is preserved when the critical premises are made explicit. The work positions SCALPEL as a diagnostic tool that moves ToM evaluation from pass/fail classification toward more mechanistic analysis.

📝 Abstract
Recent empirical results have sparked a debate about whether or not Large Language Models (LLMs) are capable of Theory of Mind (ToM). While some have found LLMs to be successful on ToM evaluations such as the False Belief task (Kosinski, 2023), others have argued that LLMs solve these tasks by exploiting spurious correlations -- not representing beliefs -- since they fail on trivial alterations to these tasks (Ullman, 2023). In this paper, we introduce SCALPEL: a technique to generate targeted modifications for False Belief tasks to test different specific hypotheses about why LLMs fail. We find that modifications which make explicit common inferences -- such as that looking at a transparent object implies recognizing its contents -- preserve LLMs' performance. This suggests that LLMs' failures on modified ToM tasks could result from a lack of more general commonsense reasoning, rather than a failure to represent mental states. We argue that SCALPEL could be helpful for explaining LLM successes and failures in other cases.
Problem

Research questions and friction points this paper is trying to address.

Why LLMs fail on trivial alterations to the False Belief Task
Testing the robustness of LLMs' Theory of Mind with SCALPEL
Identifying commonsense-inference gaps in LLM reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

SCALPEL technique for incremental stimuli modification
Tests LLMs on common-sense inference failures
Analyzes transparent-access task variations
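The core move behind SCALPEL, as described in the abstract, is to take a False Belief stimulus and insert an explicit statement of an inference the model is expected to make implicitly (e.g., that a transparent container reveals its contents). A minimal sketch of this kind of modification is below; the stimulus wording, function name, and insertion position are illustrative, not taken from the paper:

```python
def scalpel_modify(stimulus: str, explicit_premise: str) -> str:
    """Insert an explicit commonsense premise into a False Belief stimulus.

    Hypothetical helper: splices the premise in after the first sentence,
    before the belief-probing question, mirroring SCALPEL's idea of making
    an implicit inference explicit.
    """
    sentences = stimulus.split(". ")
    return ". ".join([sentences[0], explicit_premise] + sentences[1:])


# Illustrative transparent-container variant (Ullman-style alteration).
original = (
    "Sam finds a transparent glass jar filled with popcorn. "
    "The jar is labeled 'chocolate'. "
    "Sam reads the label. What does Sam believe the jar contains?"
)
premise = "Because the jar is transparent, anyone looking at it can see what is inside"
modified = scalpel_modify(original, premise)
```

Comparing model accuracy on `original` versus `modified` is the kind of controlled contrast the paper uses to separate a ToM failure from a missing commonsense inference.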