Are LLMs Biased Like Humans? Causal Reasoning as a Function of Prior Knowledge, Irrelevant Information, and Reasoning Budget

📅 2026-02-03
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates whether large language models (LLMs) replicate human cognitive biases in causal reasoning or instead rely on rule-based strategies. Drawing on 11 collider-structure causal tasks, the authors systematically compare the performance of over 20 LLMs against human participants, employing causal graph modeling, semantic perturbations, prompt overload, and chain-of-thought (CoT) prompting to dissect underlying reasoning mechanisms. The findings reveal that LLMs exhibit markedly fewer hallmark human biases—such as weak explanation elimination and violations of the Markov condition—and their judgments are often well approximated by simple rule-based models. Moreover, LLMs outperform humans under irrelevant information interference, and CoT significantly enhances their robustness, though it may falter in contexts involving intrinsic uncertainty. This work provides the first evidence of the rule-dominated nature of LLMs’ causal reasoning, offering a novel perspective on their cognitive architecture.

📝 Abstract
Large language models (LLMs) are increasingly used in domains where causal reasoning matters, yet it remains unclear whether their judgments reflect normative causal computation, human-like shortcuts, or brittle pattern matching. We benchmark 20+ LLMs against a matched human baseline on 11 causal judgment tasks formalized by a collider structure ($C_1 \!\rightarrow\! E\! \leftarrow \!C_2$). We find that a small interpretable model compresses LLMs' causal judgments well and that most LLMs exhibit more rule-like reasoning strategies than humans, who seem to account for unmentioned latent factors in their probability judgments. Furthermore, most LLMs do not mirror the characteristic human collider biases of weak explaining away and Markov violations. We probe LLMs' causal judgment robustness under (i) semantic abstraction and (ii) prompt overloading (injecting irrelevant text), and find that chain-of-thought (CoT) increases robustness for many LLMs. Together, this divergence suggests LLMs can complement humans when known biases are undesirable, but their rule-like reasoning may break down when uncertainty is intrinsic, highlighting the need to characterize LLM reasoning strategies for safe, effective deployment.
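To make the collider setup concrete, here is a minimal sketch (not from the paper) of normative "explaining away" in a $C_1 \rightarrow E \leftarrow C_2$ structure. The noisy-OR parameterization, the priors, and the causal strengths below are illustrative assumptions, not the paper's actual task parameters:

```python
# Illustrative collider C1 -> E <- C2 with a leak-free noisy-OR likelihood.
# All numbers below are assumed for demonstration only.
p_c1, p_c2 = 0.5, 0.5   # priors on the two independent causes
w1, w2 = 0.8, 0.8       # causal strengths (noisy-OR weights)

def p_e(c1, c2):
    """P(E=1 | C1=c1, C2=c2) under a leak-free noisy-OR."""
    return 1 - (1 - w1) ** c1 * (1 - w2) ** c2

def posterior_c1(e_obs, c2_obs=None):
    """P(C1=1 | E=e_obs [, C2=c2_obs]) by brute-force enumeration."""
    num = den = 0.0
    for c1 in (0, 1):
        for c2 in (0, 1):
            if c2_obs is not None and c2 != c2_obs:
                continue
            pe = p_e(c1, c2)
            joint = ((p_c1 if c1 else 1 - p_c1)
                     * (p_c2 if c2 else 1 - p_c2)
                     * (pe if e_obs else 1 - pe))
            den += joint
            if c1:
                num += joint
    return num / den

# Normative pattern: learning that the alternative cause C2 is present
# should LOWER belief in C1 -- this is "explaining away". Humans tend to
# discount too weakly; the paper asks whether LLMs show the same bias.
p_given_e = posterior_c1(e_obs=1)               # P(C1=1 | E=1)
p_given_e_c2 = posterior_c1(e_obs=1, c2_obs=1)  # P(C1=1 | E=1, C2=1)
print(p_given_e, p_given_e_c2)  # the first value exceeds the second
```

Under these assumed parameters, observing $C_2 = 1$ drops the posterior on $C_1$ from about 0.69 to about 0.55, which is the normative discounting that "weak explaining away" falls short of.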
Problem

Research questions and friction points this paper is trying to address.

causal reasoning
cognitive bias
large language models
collider structure
human-like reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal reasoning
large language models
cognitive bias
chain-of-thought
collider structure
Hanna M. Dettki
Department of Psychology, New York University, NY, USA; Department of Computer Science, University of Tübingen, Germany
Charley M. Wu
Professor of Computational Cognitive Science, TU Darmstadt
Generalization, Exploration, Compositionality, Social learning, Compression
Bob Rehder
Department of Psychology, New York University, NY, USA