Can Post-Training Transform LLMs into Causal Reasoners?

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited performance of large language models (LLMs) in causal reasoning tasks, which hinders their reliability for non-expert decision-making. To this end, the authors introduce CauGym, a benchmark dataset encompassing seven core causal reasoning tasks, and systematically evaluate five post-training methods—SFT, DPO, KTO, PPO, and GRPO—for enhancing causal reasoning capabilities. Experimental results demonstrate that a 14B-parameter model, after targeted post-training, achieves 93.5% accuracy on the CaLM benchmark, substantially outperforming OpenAI o3 (55.4%). Moreover, the model exhibits strong robustness and generalization under distributional shifts and noisy conditions. This study provides the first empirical validation that relatively small-scale LLMs can acquire powerful and reliable causal reasoning abilities through tailored post-training strategies.

📝 Abstract
Causal inference is essential for decision-making but remains challenging for non-experts. While large language models (LLMs) show promise in this domain, their precise causal estimation capabilities are still limited, and the impact of post-training on these abilities is insufficiently explored. This paper examines the extent to which post-training can enhance LLMs' capacity for causal inference. We introduce CauGym, a comprehensive dataset comprising seven core causal tasks for training and five diverse test sets. Using this dataset, we systematically evaluate five post-training approaches: SFT, DPO, KTO, PPO, and GRPO. Across five in-domain and four existing benchmarks, our experiments demonstrate that appropriate post-training enables smaller LLMs to perform causal inference competitively, often surpassing much larger models. Our 14B-parameter model achieves 93.5% accuracy on the CaLM benchmark, compared to 55.4% by OpenAI o3. Furthermore, the post-trained LLMs exhibit strong generalization and robustness under real-world conditions such as distribution shifts and noisy data. Collectively, these findings provide the first systematic evidence that targeted post-training can produce reliable and robust LLM-based causal reasoners. Our data and GRPO model are available at https://github.com/OpenCausaLab/CauGym.
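The abstract names GRPO among the evaluated post-training methods. For context, here is a minimal sketch of the group-relative advantage normalization at the heart of GRPO, in its standard formulation; this is not the paper's implementation, and the choice of population standard deviation and the zero-variance guard are assumptions of this sketch:

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: each sampled completion's
    reward is normalized against the mean and standard deviation of its
    group of samples (illustrative sketch, not the paper's code).
    Population std is assumed; some implementations use sample std."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mean) / std for r in rewards]

# Example: four sampled answers to one causal question, scored 0/1 for correctness.
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # → [1.0, -1.0, -1.0, 1.0]
```

Correct completions get positive advantages and incorrect ones negative, so the policy update reinforces answers that beat their own sampling group's average rather than relying on a learned value baseline.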
Problem

Research questions and friction points this paper is trying to address.

causal inference
large language models
post-training
causal reasoning
LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

post-training
causal reasoning
large language models
CauGym
GRPO