CausalReasoningBenchmark: A Real-World Benchmark for Disentangled Evaluation of Causal Identification and Estimation

📅 2026-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a limitation of existing causal inference benchmarks: by scoring only a final numerical output, they conflate identification and estimation and thereby obscure the root causes of model failures. To resolve this, the authors introduce a decoupled evaluation framework comprising 138 real-world datasets and 173 structured queries derived from the scientific literature. The framework requires systems to output causal identification strategies and point estimates separately, enabling independent assessment through a component-wise scoring mechanism. Experiments show that while a state-of-the-art large language model selects the correct high-level identification strategy in 84% of cases, its accuracy drops to only 30% when evaluated on complete, formally correct identification specifications. This gap indicates that the current bottleneck lies in reasoning about study-design details rather than in numerical computation, underscoring the benchmark's role in advancing robust, automated causal inference systems.

📝 Abstract
Many benchmarks for automated causal inference evaluate a system's performance based on a single numerical output, such as an Average Treatment Effect (ATE). This approach conflates two distinct steps in causal analysis: identification (formulating a valid research design under stated assumptions) and estimation (implementing that design numerically on finite data). We introduce CausalReasoningBenchmark, a benchmark of 173 queries across 138 real-world datasets, curated from 85 peer-reviewed research papers and four widely-used causal-inference textbooks. For each query a system must produce (i) a structured identification specification that names the strategy, the treatment, outcome, and control variables, and all design-specific elements, and (ii) a point estimate with a standard error. By scoring these two components separately, our benchmark enables granular diagnosis: it distinguishes failures in causal reasoning from errors in numerical execution. Baseline results with a state-of-the-art LLM show that, while the model correctly identifies the high-level strategy in 84% of cases, full identification-specification correctness drops to only 30%, revealing that the bottleneck lies in the nuanced details of research design rather than in computation. CausalReasoningBenchmark is publicly available on Hugging Face and is designed to foster the development of more robust automated causal-inference systems.
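The two-component output and component-wise scoring described above can be sketched in Python. This is a minimal illustration, not the benchmark's actual schema or grader: all class names, fields, and the tolerance-based estimation check are hypothetical assumptions about how such a disentangled evaluation might be structured.

```python
from dataclasses import dataclass, field

# Hypothetical schema for the two outputs a system must produce per query.
@dataclass
class IdentificationSpec:
    strategy: str                      # e.g. "instrumental-variables"
    treatment: str
    outcome: str
    controls: list = field(default_factory=list)
    design_elements: dict = field(default_factory=dict)  # e.g. the instrument

@dataclass
class Estimate:
    point: float       # point estimate, e.g. an ATE
    std_error: float

def score_identification(pred: IdentificationSpec, gold: IdentificationSpec) -> dict:
    """Score identification at two granularities: high-level strategy
    match vs. full-specification match (all design details correct)."""
    strategy_ok = pred.strategy == gold.strategy
    full_ok = (
        strategy_ok
        and pred.treatment == gold.treatment
        and pred.outcome == gold.outcome
        and set(pred.controls) == set(gold.controls)
        and pred.design_elements == gold.design_elements
    )
    return {"strategy_correct": strategy_ok, "full_spec_correct": full_ok}

def score_estimate(pred: Estimate, gold: Estimate, rel_tol: float = 0.05) -> bool:
    """Score estimation independently of identification, here as a
    relative-tolerance check on the point estimate (an assumed criterion)."""
    return abs(pred.point - gold.point) <= rel_tol * abs(gold.point)
```

Separating the two scores is what lets the benchmark report results like "84% strategy-correct but only 30% full-spec-correct": a prediction with the right strategy but missing controls or design elements scores on the first component only.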
Problem

Research questions and friction points this paper is trying to address.

causal inference
causal identification
causal estimation
benchmark
disentangled evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

causal identification
causal estimation
structured reasoning
benchmark
disentangled evaluation