🤖 AI Summary
This work investigates why large language models (LLMs) perform at near-chance levels in causal relation classification: is the limitation due to insufficient causal examples in the pretraining data, or to inherent structural deficits in causal representation? We systematically evaluate seven mainstream LLMs on a corpus of 18K+ medical texts, using four fine-grained causal categories (direct, conditional, correlational, non-causal) and a source-paraphrase selection task. Evaluation employs multiple metrics: accuracy, output entropy, and expected calibration error (ECE). Key findings: no significant accuracy difference between seen and unseen causal sentences (p > 0.05); a source-text selection rate of only 24.8%; output entropy approaching the theoretical maximum; and severe confidence-accuracy miscalibration in instruction-tuned models (e.g., Qwen achieves just 32.8% accuracy at >95% confidence). These results indicate that LLMs lack structured causal representations: the failures stem not from sparse causal exposure in training data but from fundamental deficits in deep causal reasoning.
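The entropy figures can be read against the task's theoretical ceiling: with four answer options, the maximum Shannon entropy is ln 4 ≈ 1.386 nats, attained only by a uniform distribution, so entropy near that value signals guessing. A minimal sketch (generic Python, not the authors' code):

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# Uniform over four options hits the theoretical maximum ln(4) ~= 1.386 nats;
# the reported value of 1.35 sits just below this ceiling.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # ~1.386
print(entropy([0.28, 0.26, 0.24, 0.22]))  # slightly lower: mild preference
print(entropy([0.90, 0.05, 0.03, 0.02]))  # much lower: confident answer
```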
📝 Abstract
Recent work shows that LLMs achieve near-random accuracy in causal relation classification, raising the question of whether such failures arise from limited pretraining exposure or from deeper representational gaps. We investigate this through uncertainty-based evaluation, testing whether pretraining exposure to causal examples improves causal understanding on >18K PubMed sentences -- half from The Pile corpus (seen in pretraining), half post-2024 (unseen) -- across seven models (Pythia-1.4B/7B/12B, GPT-J-6B, Dolly-7B/12B, Qwen-7B). We analyze model behavior through: (i) causal classification, where the model identifies causal relationships in text, and (ii) verbatim memorization probing, where we assess whether the model prefers previously seen causal statements over their paraphrases. Models perform four-way classification (direct/conditional/correlational/no-relationship) and select between original sentences and their generated paraphrases. Results show nearly identical accuracy on seen and unseen sentences (p > 0.05), no memorization bias (24.8% original-selection rate), and an almost flat output distribution over the answer options, with entropy near the theoretical maximum (1.35 vs. 1.39), consistent with random guessing. Instruction-tuned models show severe miscalibration (Qwen: >95% confidence, 32.8% accuracy, ECE = 0.49). Conditional relations induce the highest entropy (+11% vs. direct). These findings suggest that failures in causal understanding stem from a lack of structured causal representations rather than from insufficient exposure to causal examples during pretraining.
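The expected calibration error cited above (e.g., Qwen's ECE = 0.49) quantifies the gap between stated confidence and realized accuracy. A standard binned formulation, shown here as a generic sketch rather than the paper's implementation, averages |accuracy − confidence| over confidence bins, weighted by bin occupancy:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average of |accuracy - mean confidence|
    over equal-width confidence bins in (0, 1]."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# A model that answers at 95% confidence but is right only half the time:
print(expected_calibration_error([0.95] * 10, [1, 0] * 5))  # 0.45
```

Under this formulation, 95% confidence with ~50% accuracy yields an ECE of about 0.45, on the same order as Qwen's reported 0.49.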