Your Assumed DAG is Wrong and Here's How To Deal With It

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
In causal inference, prior knowledge is often encoded as a single directed acyclic graph (DAG), yet such an assumption is frequently misspecified in practice. Method: We propose a robust causal effect estimation framework that does not rely on a single deterministic DAG. Instead, we introduce the first differentiable gradient-based optimization framework operating over a large set of DAGs compatible with prior knowledge. Our approach jointly enforces causal graph constraints and linear or nonlinear structural equation models, and efficiently computes tight upper and lower bounds on causal queries—e.g., the average treatment effect—via boundary propagation. Contribution/Results: Compared to conventional methods based on a single DAG or a Markov equivalence class, our framework achieves significantly improved bound coverage and sharpness on both synthetic and real-world datasets. It provides a computationally tractable and empirically verifiable solution to the fundamental question: “How reliable are causal conclusions when the assumed DAG is incorrect?”

📝 Abstract
Assuming a directed acyclic graph (DAG) that represents prior knowledge of causal relationships between variables is a common starting point for cause-effect estimation. Existing literature typically invokes hypothetical domain expert knowledge or causal discovery algorithms to justify this assumption. In practice, neither may propose a single DAG with high confidence. Domain experts are hesitant to rule out dependencies with certainty or have ongoing disputes about relationships; causal discovery often relies on untestable assumptions itself or only provides an equivalence class of DAGs and is commonly sensitive to hyperparameter and threshold choices. We propose an efficient, gradient-based optimization method that provides bounds for causal queries over a collection of causal graphs -- compatible with imperfect prior knowledge -- that may still be too large for exhaustive enumeration. Our bounds achieve good coverage and sharpness for causal queries such as average treatment effects in linear and non-linear synthetic settings as well as on real-world data. Our approach aims at providing an easy-to-use and widely applicable rebuttal to the valid critique of "What if your assumed DAG is wrong?".
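The abstract contrasts the proposed gradient-based method with exhaustive enumeration over all DAGs compatible with prior knowledge. The sketch below illustrates only the naive enumeration baseline on a toy linear example, not the paper's method: when the compatible set is small, one can estimate the average treatment effect (ATE) under each candidate DAG and report the min/max as bounds. All data, variable names, and the two candidate graphs are illustrative assumptions.

```python
# Hypothetical enumeration baseline (NOT the paper's gradient-based method):
# estimate the ATE under each candidate DAG and take min/max as bounds.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear SEM: T -> Y, with a covariate Z that may or may not confound.
n = 5000
Z = rng.normal(size=n)
T = 0.8 * Z + rng.normal(size=n)          # Z influences treatment
Y = 1.5 * T + 0.7 * Z + rng.normal(size=n)  # true ATE is 1.5

def ate_linear(T, Y, adjust):
    """OLS estimate of the T -> Y coefficient, adjusting for given covariates."""
    X = np.column_stack([np.ones_like(T), T] + adjust)
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta[1]

# Two candidate DAGs that prior knowledge cannot decide between:
# each implies a different valid adjustment set, hence a different estimator.
candidates = {
    "Z confounds T and Y": [Z],   # back-door adjustment for Z
    "Z does not affect T": [],    # no adjustment needed
}
estimates = {name: ate_linear(T, Y, adj) for name, adj in candidates.items()}
lower, upper = min(estimates.values()), max(estimates.values())
print(f"ATE bounds over candidate DAGs: [{lower:.2f}, {upper:.2f}]")
```

Because the data were generated with Z as a true confounder, the adjusted estimate recovers the true ATE of 1.5 while the unadjusted one is biased upward; the interval between them is the kind of bound the paper computes, but via gradient-based optimization over sets of DAGs too large to enumerate this way.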
Problem

Research questions and friction points this paper is trying to address.

Addressing uncertainty in assumed DAGs
Providing bounds for causal queries
Optimizing causal effect estimation accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Gradient-based optimization method
Bounds for causal queries
Handles imperfect prior knowledge
Kirtan Padh
PhD Candidate, TU Munich, Helmholtz AI
Causal Inference, AI Ethics, AI Governance
Zhufeng Li
Technical University of Munich, Helmholtz Munich, Munich Center for Machine Learning (MCML)
Cecilia Casolo
Technical University of Munich, Helmholtz Munich, Munich Center for Machine Learning (MCML)
Niki Kilbertus
Technical University of Munich & Helmholtz Munich
Machine Learning