🤖 AI Summary
This paper addresses the identifiability of causal effects under causal abstraction: how to infer treatment effects when the underlying causal graph is only incompletely known, as is common in high-dimensional or complex systems.
Method: The authors propose a hierarchy of identifiability criteria tailored to causal abstraction, systematically linking abstract causal structures at varying levels of granularity to the causal queries they allow to be identified. The theoretical framework integrates causal graphical models, observational data, and formal logical reasoning, yielding layered identifiability criteria.
Contribution/Results: The framework does not require a fully specified causal graph; it enables identifiability assessment even when the causal structure is unknown or only partially known. The authors demonstrate its effectiveness and practicality through analysis of canonical examples from the causal inference literature.
📝 Abstract
Identifying the effect of a treatment from observational data typically requires assuming a fully specified causal diagram. However, such diagrams are rarely known in practice, especially in complex or high-dimensional settings. To overcome this limitation, recent work has explored the use of causal abstractions, simplified representations that retain partial causal information. In this paper, we consider causal abstractions formalized as collections of causal diagrams, and focus on the identifiability of causal queries within such collections. We introduce and formalize several identifiability criteria under this setting. Our main contribution is to organize these criteria into a structured hierarchy, highlighting their relationships. This hierarchical view enables a clearer understanding of what can be identified under varying levels of causal knowledge. We illustrate our framework through examples from the literature and provide tools to reason about identifiability when full causal knowledge is unavailable.
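To make the "collection of causal diagrams" idea concrete, here is a minimal sketch, not the paper's actual criteria: it treats a query P(y | do(x)) as identifiable across an abstraction when a backdoor adjustment set valid in *every* candidate diagram exists, so a single estimand works regardless of which diagram is true. The graphs `G1`/`G2`, the function names, and the choice of the backdoor criterion as the per-diagram test are all illustrative assumptions.

```python
from itertools import combinations

def descendants(dag, node):
    """Nodes reachable from `node` along directed edges."""
    seen, stack = set(), [node]
    while stack:
        for child in dag.get(stack.pop(), ()):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

def _simple_paths(dag, x, y):
    """Simple paths from x to y in the undirected skeleton."""
    adj = {v: set() for v in dag}
    for u, children in dag.items():
        for v in children:
            adj[u].add(v)
            adj.setdefault(v, set()).add(u)
    paths, stack = [], [(x, [x])]
    while stack:
        u, path = stack.pop()
        if u == y:
            paths.append(path)
            continue
        for v in adj.get(u, ()):
            if v not in path:
                stack.append((v, path + [v]))
    return paths

def _blocked(dag, path, Z):
    """Is this single path d-blocked by the conditioning set Z?"""
    Z = set(Z)
    for i in range(1, len(path) - 1):
        a, v, b = path[i - 1], path[i], path[i + 1]
        collider = v in dag.get(a, ()) and v in dag.get(b, ())
        if collider:
            # A collider blocks unless it (or a descendant) is conditioned on.
            if v not in Z and not (descendants(dag, v) & Z):
                return True
        elif v in Z:
            return True  # chain/fork node conditioned on
    return False

def backdoor_sets(dag, x, y):
    """All subsets Z satisfying the backdoor criterion for (x, y)."""
    back = [p for p in _simple_paths(dag, x, y)
            if x in dag.get(p[1], ())]  # first edge points INTO x
    desc_x = descendants(dag, x)
    rest = [v for v in dag if v not in (x, y)]
    valid = []
    for r in range(len(rest) + 1):
        for Z in combinations(rest, r):
            if not (set(Z) & desc_x) and all(_blocked(dag, p, Z) for p in back):
                valid.append(frozenset(Z))
    return valid

# An abstraction as a collection of candidate diagrams (hypothetical example):
G1 = {"Z": {"X", "Y"}, "X": {"Y"}, "Y": set()}  # Z confounds X -> Y
G2 = {"Z": {"Y"}, "X": {"Y"}, "Y": set()}       # Z affects only Y

per_graph = [set(backdoor_sets(g, "X", "Y")) for g in (G1, G2)]
common = set.intersection(*per_graph)
print(frozenset({"Z"}) in common)  # → True: adjusting for Z works in both
```

Here the query is identifiable under the abstraction {G1, G2} because adjusting for Z yields the same estimand, sum_z P(y | x, z) P(z), in both diagrams; with no shared valid set, a weaker criterion (per-diagram identifiability with possibly different estimands) or non-identifiability would apply, which is the kind of distinction the paper's hierarchy organizes.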