Compositional Causal Reasoning Evaluation in Language Models

📅 2025-03-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current generative AI evaluation lacks a unified framework for jointly assessing causal and compositional reasoning—particularly how causal effects propagate and compose over structured, graph-based representations. Method: We introduce “Compositional Causal Reasoning” (CCR), formalizing the ability to model causal-effect propagation and synthesis in graph-structured domains. We propose the first systematic CCR benchmark, integrating structural causal model (SCM) semantics, causal graph modeling, and math-word-problem generation to quantitatively evaluate Average Treatment Effect (ATE) and Probability of Necessity and Sufficiency (PNS). Our evaluation covers multiple generations of LLMs (LLaMA, Phi, GPT, o1). Contribution/Results: Experiments reveal distinct categorical error patterns under deep causal paths; notably, o1 significantly outperforms all others and is the only model whose error rate remains stable with increasing path depth. This work establishes a scalable, interpretable, and semantically grounded CCR evaluation paradigm for assessing causal reasoning in large language models.
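The causal quantities the benchmark evaluates, ATE and PNS, are standard and can be computed exactly on a toy structural causal model by enumerating the exogenous noise. The sketch below is illustrative only (the SCM, mechanism, and noise probability are assumptions, not taken from the paper): it uses a single monotone mechanism `Y := X or U_y`, under which PNS coincides with the ATE.

```python
# Hypothetical toy SCM for illustration: X -> Y with Y := X or U_y,
# where U_y is exogenous Bernoulli noise. Monotone mechanism assumed.
p_uy = 0.3  # assumed P(U_y = 1), chosen for illustration

def y(x, u_y):
    return int(x or u_y)

noise = [(0, 1 - p_uy), (1, p_uy)]  # (value, probability) of U_y

# Interventional probabilities by enumerating exogenous noise:
# P(Y=1 | do(X=1)) and P(Y=1 | do(X=0))
p_y_do1 = sum(p * y(1, u) for u, p in noise)
p_y_do0 = sum(p * y(0, u) for u, p in noise)

# ATE = P(Y=1 | do(X=1)) - P(Y=1 | do(X=0))
ate = p_y_do1 - p_y_do0

# PNS = P(Y_{X=1} = 1, Y_{X=0} = 0), exact under noise enumeration;
# with a monotone mechanism it equals the ATE.
pns = sum(p * int(y(1, u) == 1 and y(0, u) == 0) for u, p in noise)

print(ate)  # 0.7
print(pns)  # 0.7
```

Enumerating noise is only feasible for small discrete SCMs, but it gives exact ground-truth values against which a language model's answers can be scored.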


📝 Abstract
Causal reasoning and compositional reasoning are two core aspirations in generative AI. Measuring the extent of these behaviors requires principled evaluation methods. We explore a unified perspective that considers both behaviors simultaneously, termed compositional causal reasoning (CCR): the ability to infer how causal measures compose and, equivalently, how causal quantities propagate through graphs. We instantiate a framework for the systematic evaluation of CCR for the average treatment effect and the probability of necessity and sufficiency. As proof of concept, we demonstrate the design of CCR tasks for language models in the Llama, Phi, and GPT families. On a math word problem, our framework revealed a range of taxonomically distinct error patterns. Additionally, CCR errors increased with the complexity of causal paths for all models except o1.
Problem

Research questions and friction points this paper is trying to address.

No unified framework exists for jointly evaluating causal and compositional reasoning in language models.
Causal reasoning and compositional reasoning are typically measured in isolation rather than simultaneously.
Error patterns in compositional causal reasoning tasks remain uncharacterized.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified framework (CCR) for jointly evaluating causal and compositional reasoning
Systematic evaluation of causal measures (ATE, PNS) in generative AI
CCR task design for language models in the Llama, Phi, and GPT families
🔎 Similar Papers
No similar papers found.
Jacqueline Maasch
Cornell Tech
Alihan Huyuk
Harvard University
Xinnuo Xu
Microsoft Research Cambridge
natural language processing · machine learning
Aditya V. Nori
Microsoft Research Cambridge
Javier Gonzalez
Microsoft Research Cambridge