🤖 AI Summary
This study investigates whether large language models (LLMs) replicate human cognitive biases in causal reasoning or instead rely on rule-based strategies. Drawing on 11 causal judgment tasks built on a collider structure, the authors systematically compare the performance of over 20 LLMs against human participants, employing causal graph modeling, semantic perturbations, prompt overloading, and chain-of-thought (CoT) prompting to dissect the underlying reasoning mechanisms. The findings reveal that LLMs exhibit markedly fewer hallmark human biases—such as weak explaining away and violations of the Markov condition—and that their judgments are often well approximated by simple rule-based models. Moreover, LLMs outperform humans under interference from irrelevant information, and CoT significantly enhances their robustness, though it may falter in contexts involving intrinsic uncertainty. This work provides the first evidence of the rule-dominated nature of LLMs’ causal reasoning, offering a novel perspective on their cognitive architecture.
📝 Abstract
Large language models (LLMs) are increasingly used in domains where causal reasoning matters, yet it remains unclear whether their judgments reflect normative causal computation, human-like shortcuts, or brittle pattern matching. We benchmark 20+ LLMs against a matched human baseline on 11 causal judgment tasks formalized by a collider structure ($C_1 \rightarrow E \leftarrow C_2$). We find that a small interpretable model compresses LLMs' causal judgments well and that most LLMs exhibit more rule-like reasoning strategies than humans, who seem to account for unmentioned latent factors in their probability judgments. Furthermore, most LLMs do not mirror the characteristic human collider biases of weak explaining away and Markov violations. We probe LLMs' causal judgment robustness under (i) semantic abstraction and (ii) prompt overloading (injecting irrelevant text), and find that chain-of-thought (CoT) increases robustness for many LLMs. Together, this divergence suggests LLMs can complement humans when known biases are undesirable, but their rule-like reasoning may break down when uncertainty is intrinsic -- highlighting the need to characterize LLM reasoning strategies for safe, effective deployment.
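To make the collider terminology concrete, here is a minimal sketch (not from the paper) of normative "explaining away" in a collider $C_1 \rightarrow E \leftarrow C_2$. The priors and noisy-OR causal strengths below are illustrative assumptions, not the paper's parameterization:

```python
# Illustrative collider C1 -> E <- C2 with a noisy-OR likelihood.
# All parameter values here are assumed for demonstration only.
p_c1 = 0.5          # prior P(C1 = 1)
p_c2 = 0.5          # prior P(C2 = 1)
w1, w2 = 0.8, 0.8   # causal strengths of C1 and C2 on E

def p_e_given(c1, c2):
    """P(E = 1 | C1 = c1, C2 = c2) under a leak-free noisy-OR."""
    return 1 - (1 - w1 * c1) * (1 - w2 * c2)

def posterior_c1(e=1, c2=None):
    """P(C1 = 1 | E = e [, C2 = c2]) by enumeration over the collider."""
    num = den = 0.0
    for c1 in (0, 1):
        for c2v in (0, 1):
            if c2 is not None and c2v != c2:
                continue  # condition on the observed value of C2
            pe = p_e_given(c1, c2v)
            joint = (p_c1 if c1 else 1 - p_c1) * (p_c2 if c2v else 1 - p_c2)
            joint *= pe if e else 1 - pe
            num += joint * c1
            den += joint
    return num / den

# Observing E alone raises belief in C1; additionally observing the
# alternative cause C2 = 1 "explains away" E and lowers it again.
print(posterior_c1(e=1))        # P(C1=1 | E=1)        -> 0.6875
print(posterior_c1(e=1, c2=1))  # P(C1=1 | E=1, C2=1)  -> ~0.545
```

"Weak explaining away", one of the human biases the abstract mentions, is the tendency to reduce belief in $C_1$ less than this normative calculation requires once the alternative cause $C_2$ is known to be present.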