Do Large Language Models Show Biases in Causal Learning? Insights from Contingency Judgment

📅 2025-10-15
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This study investigates whether large language models (LLMs) exhibit systematic cognitive biases in causal learning, focusing on spurious attribution in zero-contingency medical scenarios, where no true causal relationship exists. Method: The authors construct a controlled dataset of 1,000 medical judgment items, each showing zero statistical contingency between the putative cause and the effect, modeled on classical psychological paradigms. Using structured prompts, they evaluate causal inference performance across multiple state-of-the-art LLMs. Contribution/Results: All evaluated models consistently assert non-existent causal relationships with statistically significant bias, confirming a pervasive "causal hallucination" phenomenon: superficial mimicry of causal language without genuine causal reasoning capability. The work provides the first quantitative, experimentally controlled validation of LLMs' systematic causal misjudgment tendency. It delivers a critical caution for deploying LLMs in high-stakes domains such as healthcare and motivates the development of rigorous causal understanding evaluation frameworks.

📝 Abstract
Causal learning is the cognitive process of developing the capability of making causal inferences based on available information, often guided by normative principles. This process is prone to errors and biases, such as the illusion of causality, in which people perceive a causal relationship between two variables despite lacking supporting evidence. This cognitive bias has been proposed to underlie many societal problems, including social prejudice, stereotype formation, misinformation, and superstitious thinking. In this work, we examine whether large language models are prone to developing causal illusions when faced with a classic cognitive science paradigm: the contingency judgment task. To investigate this, we constructed a dataset of 1,000 null contingency scenarios (in which the available information is not sufficient to establish a causal relationship between variables) within medical contexts and prompted LLMs to evaluate the effectiveness of potential causes. Our findings show that all evaluated models systematically inferred unwarranted causal relationships, revealing a strong susceptibility to the illusion of causality. While there is ongoing debate about whether LLMs genuinely understand causality or merely reproduce causal language without true comprehension, our findings support the latter hypothesis and raise concerns about the use of language models in domains where accurate causal reasoning is essential for informed decision-making.
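The null-contingency scenarios described in the abstract follow the classic ΔP rule from contingency judgment research: a cause is unsupported when the probability of the effect is the same with or without it. A minimal sketch of such a scenario (the function name and the counts are illustrative, not drawn from the paper's dataset):

```python
def delta_p(cause_effect, cause_no_effect, no_cause_effect, no_cause_no_effect):
    """Contingency index: ΔP = P(effect | cause) - P(effect | no cause)."""
    p_effect_given_cause = cause_effect / (cause_effect + cause_no_effect)
    p_effect_given_no_cause = no_cause_effect / (no_cause_effect + no_cause_no_effect)
    return p_effect_given_cause - p_effect_given_no_cause

# Hypothetical medical scenario: patients recover at the same 75% rate
# whether or not they take the drug, so the data support no causal link.
dp = delta_p(cause_effect=15, cause_no_effect=5,
             no_cause_effect=15, no_cause_no_effect=5)
print(dp)  # 0.0 -> null contingency
```

An unbiased judge should rate the drug as ineffective here; the illusion of causality is the tendency to report a causal relationship even when ΔP is zero.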
Problem

Research questions and friction points this paper is trying to address.

Examining LLM susceptibility to causal illusions in contingency judgment tasks
Assessing unwarranted causal inferences in medical null-contingency scenarios
Investigating whether LLMs reproduce causal language without true comprehension
Innovation

Methods, ideas, or system contributions that make the work stand out.

Evaluated LLMs on null contingency scenarios
Constructed medical dataset for causal judgment
Tested susceptibility to causal illusion bias