🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit systematic cognitive biases in causal learning, particularly spurious attribution in zero-contingency medical scenarios, i.e., scenarios in which no true causal relationship exists.
Method: Inspired by classical contingency judgment paradigms from psychology, we construct a controlled dataset of 1,000 medical judgment items, each exhibiting zero statistical contingency between the putative cause and the effect. Using structured prompts, we evaluate causal inference performance across multiple state-of-the-art LLMs.
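To make the zero-contingency construction concrete, the sketch below shows how one such item can be built and verified under the standard ΔP contingency measure (an assumption about the construction; this is not the authors' code, and the cell counts and names are illustrative).

```python
# Minimal sketch (illustrative, not the paper's code): a zero-contingency item
# corresponds to a 2x2 table in which the outcome rate is identical with and
# without the putative cause, so Delta-P = P(E|C) - P(E|~C) = 0.

def delta_p(cause_effect, cause_no_effect, no_cause_effect, no_cause_no_effect):
    """Compute Delta-P from the four cells of a 2x2 contingency table."""
    p_e_given_c = cause_effect / (cause_effect + cause_no_effect)
    p_e_given_not_c = no_cause_effect / (no_cause_effect + no_cause_no_effect)
    return p_e_given_c - p_e_given_not_c

# Hypothetical item: 20 of 25 treated patients recover, and 16 of 20 untreated
# patients recover. The recovery rate is 0.8 in both groups, so the evidence
# provides no support for the treatment being effective.
assert delta_p(20, 5, 16, 4) == 0.0
```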
Contribution/Results: All evaluated models systematically infer causal relationships where none exist, with statistically significant bias, confirming a pervasive "causal hallucination" phenomenon: superficial mimicry of causal language without genuine causal reasoning capability. This work provides the first quantitative, experimentally controlled validation of LLMs' systematic tendency toward causal misjudgment. It delivers a critical caution against deploying LLMs in high-stakes domains such as healthcare and motivates the development of rigorous evaluation frameworks for causal understanding.
📄 Abstract
Causal learning is the cognitive process of developing the capability to make causal inferences based on available information, often guided by normative principles. This process is prone to errors and biases, such as the illusion of causality, in which people perceive a causal relationship between two variables despite lacking supporting evidence. This cognitive bias has been proposed to underlie many societal problems, including social prejudice, stereotype formation, misinformation, and superstitious thinking. In this work, we examine whether large language models are prone to developing causal illusions when faced with a classic cognitive science paradigm: the contingency judgment task. To investigate this, we constructed a dataset of 1,000 null contingency scenarios (in which the available information is not sufficient to establish a causal relationship between variables) within medical contexts and prompted LLMs to evaluate the effectiveness of potential causes. Our findings show that all evaluated models systematically inferred unwarranted causal relationships, revealing a strong susceptibility to the illusion of causality. While there is ongoing debate about whether LLMs genuinely understand causality or merely reproduce causal language without true comprehension, our findings support the latter hypothesis and raise concerns about the use of language models in domains where accurate causal reasoning is essential for informed decision-making.
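For illustration only, the sketch below shows one way a null-contingency medical scenario of this kind might be phrased and posed to a model; the prompt wording, the 0-100 effectiveness scale, and the cell counts are assumptions made for this example, not the paper's actual materials.

```python
# Illustrative sketch of a null-contingency judgment prompt (assumed wording,
# not reproduced from the paper). The model is asked to rate the effectiveness
# of a treatment even though the presented evidence supports no causal link.

PROMPT_TEMPLATE = (
    "A new medication was tested on patients with a rare syndrome.\n"
    "- {a} patients took the medication and recovered.\n"
    "- {b} patients took the medication and did not recover.\n"
    "- {c} patients did not take the medication and recovered.\n"
    "- {d} patients did not take the medication and did not recover.\n"
    "On a scale from 0 (not effective at all) to 100 (perfectly effective), "
    "how effective is the medication? Answer with a single number."
)

# A zero-contingency instance: the recovery rate is 0.8 both with and without
# the medication, so an unbiased judge should answer close to 0. Ratings well
# above 0 across many such items would indicate the illusion-of-causality
# pattern described above.
prompt = PROMPT_TEMPLATE.format(a=20, b=5, c=16, d=4)
print(prompt)
```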