Linear-LLM-SCM: Benchmarking LLMs for Coefficient Elicitation in Linear-Gaussian Causal Models

📅 2026-02-10
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the capability of large language models (LLMs) to perform quantitative causal reasoning—specifically, estimating structural equation coefficients—in linear Gaussian causal models with continuous variables. To this end, the authors propose a plug-and-play benchmarking framework that decomposes any given directed acyclic graph (DAG) into local parent–child node sets, prompts LLMs via natural language to generate regression-based structural equations, and quantifies causal modeling performance by comparing estimated coefficients against ground-truth parameters. This framework establishes the first evaluation suite for LLMs in quantitative causal inference over continuous linear models, enabling seamless integration with arbitrary DAGs and off-the-shelf LLMs, with an open-source implementation provided. Experiments reveal that current LLMs exhibit substantial randomness in coefficient estimation, are sensitive to DAG misspecification, and display instability under structural or semantic perturbations, highlighting their limitations as reliable tools for quantitative causal analysis.
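The benchmarking loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `elicit_coefficients` is a hypothetical stand-in for the natural-language prompting and response-parsing step, and the error metric (mean absolute error against ground-truth coefficients) is an assumed choice.

```python
# Sketch of the plug-and-play benchmark: decompose a DAG into local
# parent-child sets, elicit one linear structural equation per node,
# and score the elicited coefficients against ground truth.

def parent_sets(dag):
    """Map each node to its sorted parent list, given edges as (parent, child) pairs."""
    nodes = {n for edge in dag for n in edge}
    return {child: sorted(p for p, c in dag if c == child) for child in nodes}

def elicit_coefficients(child, parents):
    # Placeholder for the LLM prompt/parse step; returns 1.0 for every
    # parent so the sketch stays runnable and deterministic.
    return {p: 1.0 for p in parents}

def benchmark(dag, ground_truth):
    """Mean absolute error between elicited and true structural coefficients."""
    errors = []
    for child, parents in parent_sets(dag).items():
        estimated = elicit_coefficients(child, parents)
        for p in parents:
            errors.append(abs(estimated[p] - ground_truth[(p, child)]))
    return sum(errors) / len(errors) if errors else 0.0

dag = [("X", "Y"), ("Z", "Y")]                 # X -> Y <- Z
truth = {("X", "Y"): 0.5, ("Z", "Y"): 2.0}
print(round(benchmark(dag, truth), 2))         # mean of |1-0.5| and |1-2| = 0.75
```

In practice the placeholder would build a natural-language description of the child node and its parents, send it to an off-the-shelf LLM, and parse a regression-style equation from the reply; everything else in the loop is LLM-agnostic, which is what makes the framework plug-and-play.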

📝 Abstract
Large language models (LLMs) have shown potential in identifying qualitative causal relations, but their ability to perform quantitative causal reasoning -- estimating effect sizes that parametrize functional relationships -- remains underexplored in continuous domains. We introduce Linear-LLM-SCM, a plug-and-play benchmarking framework for evaluating LLMs on linear Gaussian structural causal model (SCM) parametrization when the DAG is given. The framework decomposes a DAG into local parent-child sets and prompts an LLM to produce a regression-style structural equation per node, which is aggregated and compared against available ground-truth parameters. Our experiments reveal several challenges in such benchmarking: strong stochasticity in the outputs of some models and susceptibility to DAG misspecification via spurious edges in continuous domains. Across models, we observe substantial variability in coefficient estimates in some settings and sensitivity to structural and semantic perturbations, highlighting current limitations of LLMs as quantitative causal parameterizers. We also open-source the benchmarking framework so that researchers can plug in their own DAGs and any off-the-shelf LLM for evaluation in their domains.
Problem

Research questions and friction points this paper is trying to address.

LLM
causal inference
coefficient elicitation
structural causal model
quantitative reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Linear-LLM-SCM
causal reasoning
structural causal model
coefficient elicitation
benchmarking
Kanta Yamaoka
Data Science and its Applications, German Research Centre for Artificial Intelligence (DFKI), Germany; Dept. of Computer Science, University of Kaiserslautern–Landau (RPTU), Germany
Sumantrak Mukherjee
Data Science and its Applications, German Research Centre for Artificial Intelligence (DFKI), Germany
Thomas Gärtner
TU Wien (Technical University of Vienna)
Machine Learning · Data Mining
David Antony Selby
Data Science and its Applications, German Research Centre for Artificial Intelligence (DFKI), Germany
Stefan Konigorski
Digital Health - Machine Learning Research Group, Hasso Plattner Institute for Digital Engineering, Germany; Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, USA
Eyke Hüllermeier
Professor of Computer Science, Paderborn University
Artificial Intelligence · Machine Learning · Fuzzy Logic · Bioinformatics
Viktor Bengs
German Research Center for Artificial Intelligence (DFKI)
Bandit algorithms · Preference learning · Uncertainty Quantification · Algorithm Configuration
Sebastian Josef Vollmer
Data Science and its Applications, German Research Centre for Artificial Intelligence (DFKI), Germany; Dept. of Computer Science, University of Kaiserslautern–Landau (RPTU), Germany