Language Models as Causal Effect Generators

📅 2024-11-12
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Existing benchmarks for evaluating causal inference methods, and for auditing implicit causal reasoning in large language models (LLMs), lack controllability and cannot generate counterfactual data. Method: The paper proposes Sequence-Driven Structural Causal Models (SD-SCMs), a framework that pairs a user-specified directed acyclic graph (DAG) with an LLM acting as the structural equations, yielding automatic generation of observational, interventional, and individual-level counterfactual datasets. Contributions/Results: (1) SD-SCMs enable SCM construction without manually specifying functional forms; (2) a new causal benchmark of thousands of heterogeneous datasets supports systematic evaluation of treatment effect estimators with and without hidden confounding; (3) the same procedure provides an auditable, attributable mechanism for detecting causal effects implicitly encoded in LLMs, such as misinformation or discrimination.

📝 Abstract
We present a framework for large language model (LLM) based data generation with controllable causal structure. In particular, we define a procedure for turning any language model and any directed acyclic graph (DAG) into a sequence-driven structural causal model (SD-SCM). Broadly speaking, an SD-SCM is a causal model with user-defined structure and LLM-defined structural equations. We characterize how an SD-SCM allows sampling from observational, interventional, and counterfactual distributions according to the desired causal structure. We then leverage this procedure to propose a new type of benchmark for causal inference methods, generating individual-level counterfactual data without needing to manually specify functional relationships between variables. We create an example benchmark consisting of thousands of datasets, and test a suite of popular estimation methods on these datasets for average, conditional average, and individual treatment effect estimation, both with and without hidden confounding. Apart from generating data, the same procedure also allows us to test for the presence of a causal effect that might be encoded in an LLM. This procedure can underpin auditing LLMs for misinformation, discrimination, or otherwise undesirable behavior. We believe SD-SCMs can serve as a useful tool in any application that would benefit from sequential data with controllable causal structure.
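The abstract's procedure can be sketched in miniature. In the toy below, `lm_sample` is a hypothetical stand-in for the LLM: the real SD-SCM would sample each variable from the model's next-token distribution conditioned on a textual description of its parents, whereas here a hand-written probability table plays that role. The variable names (`smoker`, `tar`, `cancer`) and all numeric probabilities are illustrative assumptions, not from the paper. Shared exogenous noise `u` is what lets the same draw be replayed under an intervention to produce an individual-level counterfactual.

```python
import random

# Hypothetical stand-in for the LLM-defined structural equations.
# Given a variable, its parents' values, and exogenous noise u in [0, 1),
# return a sampled value. All probabilities here are made up for the toy.
def lm_sample(var, parent_vals, u):
    if var == "smoker":
        return int(u < 0.3)
    if var == "tar":
        p = 0.9 if parent_vals["smoker"] else 0.1
        return int(u < p)
    if var == "cancer":
        p = 0.8 if parent_vals["tar"] else 0.05
        return int(u < p)
    raise ValueError(f"unknown variable: {var}")

def sample(order, parents, do=None, noise=None):
    """Ancestral sampling along a topological order of the DAG.

    `do` fixes variables (interventional distribution); passing back the
    returned `noise` replays the same exogenous draws (counterfactual).
    """
    do = do or {}
    noise = noise if noise is not None else {v: random.random() for v in order}
    vals = {}
    for v in order:
        if v in do:
            vals[v] = do[v]  # intervention: sever incoming edges, fix value
        else:
            vals[v] = lm_sample(v, {p: vals[p] for p in parents[v]}, noise[v])
    return vals, noise

order = ["smoker", "tar", "cancer"]
parents = {"smoker": [], "tar": ["smoker"], "cancer": ["tar"]}

factual, u = sample(order, parents)                               # observational
counterfactual, _ = sample(order, parents, do={"smoker": 1}, noise=u)
```

Because `counterfactual` reuses the exogenous noise from `factual`, the pair forms an individual-level counterfactual, which is exactly what lets the benchmark score estimators on individual treatment effects without manually specified functional forms.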
Problem

Research questions and friction points this paper is trying to address.

Proposing a framework for causal models with language-defined mechanisms
Creating benchmarks to test treatment effect estimation methods
Auditing language models for desirable and undesirable causal effects
Innovation

Methods, ideas, or system contributions that make the work stand out.

SD-SCMs combine user-defined structure with language-model mechanisms
Generates observational, interventional, and counterfactual distribution samples
Creates benchmarks for evaluating causal inference methods and auditing LMs