🤖 AI Summary
This work addresses the challenge of learning, from data, causal models that are both structurally interpretable and consistent with the semantic requirements of causal abstraction. To this end, the authors propose the Consistent Abstraction Network (CAN), which integrates causal abstraction with sheaf theory to construct a learnable architecture adhering to the semantic embedding principle. The global learning problem is decomposed into edge-specific local Riemannian optimization subproblems, thereby avoiding a nonconvex global objective. An iterative algorithm, SPECTRAL, solves these local subproblems efficiently via closed-form updates and handles both positive-definite and positive-semidefinite covariance matrices. Experiments on synthetic data show competitive performance on the causal abstraction learning task and successful recovery of diverse CAN structures.
📝 Abstract
Causal artificial intelligence aims to enhance explainability, trustworthiness, and robustness in AI by leveraging structural causal models (SCMs). In this pursuit, recent advances formalize network sheaves and cosheaves of causal knowledge. Pushing in the same direction, we tackle the learning of a consistent causal abstraction network (CAN), a sheaf-theoretic framework where (i) SCMs are Gaussian, (ii) restriction maps are transposes of constructive linear causal abstractions (CAs) adhering to the semantic embedding principle, and (iii) edge stalks correspond, up to permutation, to the node stalks of more detailed SCMs. Our problem formulation separates into edge-specific local Riemannian problems and avoids nonconvex objectives. We propose an efficient search procedure that solves the local problems with SPECTRAL, our iterative method with closed-form updates, suitable for both positive definite and positive semidefinite covariance matrices. Experiments on synthetic data show competitive performance in the CA learning task and successful recovery of diverse CAN structures.
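The linear-Gaussian setting at the heart of the framework admits a compact illustration: a linear CA maps a low-level Gaussian SCM to a coarser one, and since Gaussians are closed under linear maps, the abstract covariance is simply the pushforward of the concrete one. The sketch below is a minimal, hypothetical example (the matrices `T` and `Sigma` are invented for illustration); it is not the paper's CAN construction or the SPECTRAL method.

```python
import numpy as np

# Hypothetical covariance of a low-level Gaussian SCM over 3 variables
# (assumed values, chosen only to be symmetric positive definite).
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])

# A constructive linear abstraction: each high-level variable aggregates
# a disjoint cluster of low-level variables (illustrative choice).
T = np.array([[1.0, 1.0, 0.0],   # high-level var 1 <- x1 + x2
              [0.0, 0.0, 1.0]])  # high-level var 2 <- x3

# Linear images of Gaussians are Gaussian, so the high-level SCM's
# covariance is the pushforward T Sigma T^T.
Sigma_abs = T @ Sigma @ T.T
print(Sigma_abs)  # -> [[3.  0.5]
                  #     [0.5 1. ]]
```

In the CAN framework the restriction maps are the transposes of such abstraction matrices, so learning them reduces to fitting one map per edge of the network, which is what motivates the edge-specific local subproblems.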