Interpretable Neural Causal Models with TRAM-DAGs

📅 2025-03-20
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing neural causal models suffer from poor interpretability, restricted variable-type support (continuous or discrete only), and an inability to uniformly handle all three levels of Pearl's causal hierarchy (observational, interventional, counterfactual). To address these limitations, we propose TRAM-DAG, an interpretable neural causal model that embeds transformation models (TRAMs) into the structural equations of a Structural Causal Model (SCM), explicitly incorporating prior DAG knowledge. TRAM-DAG unifies the modeling of continuous, ordinal, and binary variables; its parameters carry explicit causal semantics, enabling the full range of L1–L3 causal queries. Experiments demonstrate that TRAM-DAG matches or surpasses state-of-the-art methods under full observability and reliably performs L3 (counterfactual) reasoning across three canonical causal graph structures, including under unobserved confounding.

πŸ“ Abstract
The ultimate goal of most scientific studies is to understand the underlying causal mechanism between the involved variables. Structural causal models (SCMs) are widely used to represent such causal mechanisms. Given an SCM, causal queries on all three levels of Pearl's causal hierarchy can be answered: $L_1$ observational, $L_2$ interventional, and $L_3$ counterfactual. An essential aspect of modeling the SCM is to model the dependency of each variable on its causal parents. Traditionally this is done by parametric statistical models, such as linear or logistic regression models. This allows to handle all kinds of data types and fit interpretable models but bears the risk of introducing a bias. More recently neural causal models came up using neural networks (NNs) to model the causal relationships, allowing the estimation of nearly any underlying functional form without bias. However, current neural causal models are generally restricted to continuous variables and do not yield an interpretable form of the causal relationships. Transformation models range from simple statistical regressions to complex networks and can handle continuous, ordinal, and binary data. Here, we propose to use TRAMs to model the functional relationships in SCMs allowing us to bridge the gap between interpretability and flexibility in causal modeling. We call this method TRAM-DAG and assume currently that the underlying directed acyclic graph is known. For the fully observed case, we benchmark TRAM-DAGs against state-of-the-art statistical and NN-based causal models. We show that TRAM-DAGs are interpretable but also achieve equal or superior performance in queries ranging from $L_1$ to $L_3$ in the causal hierarchy. For the continuous case, TRAM-DAGs allow for counterfactual queries for three common causal structures, including unobserved confounding.
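The abstract's core idea, embedding a transformation model in each structural equation of an SCM, can be sketched in a few lines. The toy example below is a minimal illustration, not the paper's implementation: the two-node DAG x1 → x2, the linear transformation h(x2) = a·x2 + b, the shift term beta·x1, and the standard-logistic latent noise are all illustrative assumptions. It samples observational data and then answers an L2 interventional query by re-sampling the child under do(x1 = 2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative TRAM-style structural equation for the DAG x1 -> x2:
# conditional CDF  F(x2 | x1) = F_Z(h(x2) - beta * x1),
# with F_Z the standard logistic CDF and h(x2) = a * x2 + b monotone.
A, B, BETA = 1.5, 0.0, 0.8

def sample_child(parent):
    """Sample x2 | x1 by drawing latent z and inverting the transformation:
    h(x2) - BETA * x1 = z  =>  x2 = (z + BETA * x1 - B) / A."""
    z = rng.logistic(size=parent.shape)  # latent noise on the logistic scale
    return (z + BETA * parent - B) / A

x1 = rng.normal(size=10_000)   # source node, no parents
x2 = sample_child(x1)          # observational (L1) sample of the child

# L2 interventional query do(x1 = 2): fix x1 and re-sample downstream nodes.
x2_do = sample_child(np.full(10_000, 2.0))
```

Because the transformation h is monotone, the same machinery yields the conditional CDF in closed form, which is what makes the fitted parameters (here `A`, `BETA`) directly interpretable.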
Problem

Research questions and friction points this paper is trying to address.

Bridging interpretability and flexibility in causal modeling.
Handling continuous, ordinal, and binary data in causal models.
Enabling counterfactual queries in complex causal structures.
Innovation

Methods, ideas, or system contributions that make the work stand out.

TRAM-DAGs bridge interpretability and flexibility.
Handles continuous, ordinal, and binary data types.
Achieves equal or superior performance across L1–L3 causal queries.
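The counterfactual (L3) queries highlighted above follow Pearl's abduction–action–prediction recipe; with a monotone transformation model, the latent noise of an observed unit can be recovered exactly by inverting the structural equation. A hedged toy sketch, reusing the illustrative linear transformation h(x2) = a·x2 with shift beta·x1 (these choices are assumptions for illustration, not the paper's fitted model):

```python
# Toy L3 query in a linear TRAM-style SCM x1 -> x2 with h(x2) = A * x2.
A, BETA = 1.5, 0.8

def abduct(x1, x2):
    # Step 1 (abduction): recover the unit's latent noise from the observation,
    # z = h(x2) - BETA * x1.
    return A * x2 - BETA * x1

def predict(x1_cf, z):
    # Step 3 (prediction): push the same noise through the modified model,
    # solving h(x2_cf) - BETA * x1_cf = z for x2_cf.
    return (z + BETA * x1_cf) / A

# Observed unit: x1 = 1.0, x2 = 0.9
z = abduct(1.0, 0.9)        # z = 1.5 * 0.9 - 0.8 = 0.55
x2_cf = predict(2.0, z)     # "what would x2 have been had x1 been 2.0?"  ≈ 1.433
```

Step 2 (action) is implicit here: setting `x1_cf = 2.0` replaces the structural equation for x1 with the constant do(x1 = 2).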
Beate Sick
ZHAW, UZH
deep learning, statistics, causality, medical research

Oliver Durr
HTWG Konstanz, Germany; TIDIT.ch, Switzerland