🤖 AI Summary
Current generative AI evaluation lacks a unified framework for jointly assessing causal and compositional reasoning, in particular how causal effects propagate and compose over structured, graph-based representations.
Method: We introduce “Compositional Causal Reasoning” (CCR), formalizing the ability to model causal-effect propagation and synthesis in graph-structured domains. We propose the first systematic CCR benchmark, integrating structural causal model (SCM) semantics, causal graph modeling, and math-word-problem generation to quantitatively evaluate Average Treatment Effect (ATE) and Probability of Necessity and Sufficiency (PNS). Our evaluation covers multiple generations of LLMs (LLaMA, Phi, GPT, o1).
Contribution/Results: Experiments reveal taxonomically distinct error patterns across models; notably, o1 significantly outperforms all other models and is the only one whose error rate remains stable as causal-path depth increases. This work establishes a scalable, interpretable, and semantically grounded paradigm for evaluating compositional causal reasoning in large language models.
📝 Abstract
Causal reasoning and compositional reasoning are two core aspirations in generative AI. Measuring the extent of these behaviors requires principled evaluation methods. We explore a unified perspective that considers both behaviors simultaneously, termed compositional causal reasoning (CCR): the ability to infer how causal measures compose and, equivalently, how causal quantities propagate through graphs. We instantiate a framework for the systematic evaluation of CCR for the average treatment effect and the probability of necessity and sufficiency. As proof of concept, we demonstrate the design of CCR tasks for language models in the LLaMA, Phi, and GPT families. On a math word problem, our framework revealed a range of taxonomically distinct error patterns. Additionally, CCR errors increased with the complexity of causal paths for all models except o1.
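The idea that causal quantities compose along a graph can be made concrete with a minimal sketch (not taken from the paper): in a linear chain SCM X → Y → Z, the total average treatment effect factorizes as the product of the edge effects, ATE(X→Z) = ATE(X→Y) · ATE(Y→Z). The coefficients and variable names below are illustrative assumptions, and the claim is verified by Monte Carlo simulation of do-interventions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a, b = 2.0, 0.5  # assumed edge coefficients: X -> Y and Y -> Z

def mean_z_under_do(x_val):
    """Simulate the linear chain X -> Y -> Z under do(X = x_val)."""
    x = np.full(n, x_val)
    y = a * x + rng.normal(size=n)   # structural equation Y := aX + noise
    z = b * y + rng.normal(size=n)   # structural equation Z := bY + noise
    return z.mean()

# ATE of X on Z, estimated from the interventions do(X=1) vs do(X=0)
ate_xz = mean_z_under_do(1.0) - mean_z_under_do(0.0)

# Composition: in a linear chain the total effect is the product of
# edge effects, so ATE(X->Z) should match a * b.
print(round(ate_xz, 2))  # ≈ 1.0, i.e. a * b
```

This multiplicative composition is specific to linear SCMs; for binary variables and measures such as PNS, the composition rules differ, which is precisely the kind of structure a CCR benchmark probes.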