🤖 AI Summary
Existing graph distance measures for acyclic directed mixed graphs (ADMGs) lack causal interpretability, especially under latent confounding, because they ignore how structural differences affect downstream causal effect estimation.
Method: We propose a task-oriented graph distance metric explicitly designed for causal effect estimation. Grounded in the *fixing* operator and symbolic computation, it quantifies the deviation in identifiable interventional effect estimates induced by structural discrepancies between ADMGs.
Contribution/Results: Unlike conventional topology-based distances (e.g., graph edit distance), our metric defines graph distance as the discrepancy in identifiable causal effects, ensuring both causal interpretability and task relevance. Experiments across diverse graph perturbation scenarios show that the proposed measure is substantially more sensitive to causally meaningful structural changes than baseline metrics, improving the validity and practical utility of causal graph evaluation under latent confounding.
📝 Abstract
Causal discovery aims to recover graphs that represent causal relations among given variables from observations, and new methods are constantly being proposed. Increasingly, the community raises questions about how much progress is being made, because properly evaluating discovered graphs remains notoriously difficult, particularly under latent confounding. We propose a graph distance measure for acyclic directed mixed graphs (ADMGs) based on the downstream task of cause-effect estimation under unobserved confounding. Our approach uses identification via fixing and a symbolic verifier to quantify how graph differences distort cause-effect estimands for different treatment-outcome pairs. We analyze the behavior of the measure under different graph perturbations and compare it against existing distance metrics.
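To make the idea concrete, the sketch below implements a much coarser proxy for the proposed measure: instead of symbolically comparing full interventional estimands, it applies the fixing-operator identification criterion (an effect p(y | do(t)) is identifiable iff every district of G[Y*] is reachable by a sequence of fixings, where Y* are the ancestors of y once t is removed) and defines the distance between two ADMGs as the fraction of ordered treatment-outcome pairs on which their identifiability verdicts disagree. All class and function names (`ADMG`, `identifiable`, `id_distance`) are illustrative, not the authors' code, and the simplification of deleting a vertex outright when fixing it is an assumption valid only for the downstream district/descendant checks used here.

```python
from itertools import product


class ADMG:
    """Acyclic directed mixed graph: directed (->) and bidirected (<->) edges."""

    def __init__(self, vertices, directed=(), bidirected=()):
        self.v = set(vertices)
        self.di = {tuple(e) for e in directed}        # (a, b) means a -> b
        self.bi = {frozenset(e) for e in bidirected}  # {a, b} means a <-> b

    def induced(self, s):
        s = set(s)
        return ADMG(s,
                    {e for e in self.di if set(e) <= s},
                    {e for e in self.bi if e <= s})


def _closure(start, step):
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for w in step(u):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen


def descendants(g, x):
    return _closure(x, lambda u: {b for (a, b) in g.di if a == u})


def ancestors(g, x):
    return _closure(x, lambda u: {a for (a, b) in g.di if b == u})


def district(g, x):
    # Bidirected-connected component of x.
    return _closure(x, lambda u: {w for e in g.bi if u in e for w in e if w != u})


def fixable(g, v):
    # v is fixable iff its district and descendant sets intersect only in {v}.
    return district(g, v) & descendants(g, v) == {v}


def fix(g, v):
    # Fixing removes all edge heads into v; since v then has no incoming
    # directed or bidirected edges, deleting v entirely is equivalent for the
    # later district/descendant computations among the remaining vertices.
    return ADMG(g.v - {v},
                {e for e in g.di if v not in e},
                {e for e in g.bi if v not in e})


def reachable(g, d):
    # d is reachable if all of V \ d can be fixed greedily (fixing is confluent).
    while g.v != set(d):
        v = next((u for u in g.v - set(d) if fixable(g, u)), None)
        if v is None:
            return False
        g = fix(g, v)
    return True


def identifiable(g, t, y):
    # p(y | do(t)) is identifiable iff every district of G[Y*] is reachable
    # in G, where Y* = ancestors of y in G with t removed.
    ystar = ancestors(g.induced(g.v - {t}), y)
    sub, checked = g.induced(ystar), set()
    for u in ystar:
        if u in checked:
            continue
        d = district(sub, u)
        checked |= d
        if not reachable(g, d):
            return False
    return True


def id_distance(g1, g2):
    # Toy distance: fraction of ordered (t, y) pairs on which the two graphs
    # disagree about identifiability of p(y | do(t)).
    pairs = [(t, y) for t, y in product(g1.v, repeat=2) if t != y]
    return sum(identifiable(g1, t, y) != identifiable(g2, t, y)
               for t, y in pairs) / len(pairs)
```

On the classic bow graph (t -> y plus t <-> y) the effect of t on y is not identifiable, while it is in the confounder-free DAG t -> y, so the two graphs disagree on one of the two ordered pairs and the toy distance is 0.5. The paper's measure goes further by comparing the symbolic estimands themselves, so it can also separate graphs whose effects are all identifiable but via different adjustment formulas.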