Causal representation learning from network data

πŸ“… 2025-09-01
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Addressing causal representation learning under non-i.i.d. conditions in network-structured data, this paper proposes GraCE-VAE, a framework that extends causal identifiability results from the i.i.d. setting to structured graph data. GraCE-VAE integrates graph neural networks with a discrepancy-based variational autoencoder to jointly model observational and interventional data, enabling simultaneous recovery of the latent causal graph and intervention effects. Its core idea is to leverage network topology explicitly as a structural prior for causal disentanglement. Experiments on three genetic perturbation datasets show that GraCE-VAE outperforms state-of-the-art baselines in both causal graph reconstruction and intervention-effect estimation, empirically supporting the role of network structure in causal representation learning.

πŸ“ Abstract
Causal disentanglement from soft interventions is identifiable under the assumptions of linear interventional faithfulness and availability of both observational and interventional data. Previous research has looked into this problem from the perspective of i.i.d. data. Here, we develop a framework, GraCE-VAE, for non-i.i.d. settings, in which structured context in the form of network data is available. GraCE-VAE integrates discrepancy-based variational autoencoders with graph neural networks to jointly recover the true latent causal graph and intervention effects. We show that the theoretical results of identifiability from i.i.d. data hold in our setup. We also empirically evaluate GraCE-VAE against state-of-the-art baselines on three genetic perturbation datasets to demonstrate the impact of leveraging structured context for causal disentanglement.
Problem

Research questions and friction points this paper is trying to address.

Causal disentanglement from soft interventions when structured network data is available
Jointly recovering the latent causal graph and intervention effects
Leveraging structured context to improve causal representation learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Graph neural networks encode the structured network context
A discrepancy-based variational autoencoder recovers the latent causal graph and intervention effects
Identifiability results from the i.i.d. setting are shown to carry over to non-i.i.d. network data
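The paper's exact architecture is not given here, but the described combination, a GNN encoder over the context network feeding a VAE whose latents pass through a learned causal adjacency, can be sketched roughly as follows. All names, shapes, the mean-aggregation message passing, and the linear-SEM-style decoder are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn_encode(X, A, W):
    """One round of mean-aggregation message passing over the context graph A."""
    deg = A.sum(axis=1, keepdims=True) + 1e-8
    H = (A @ X) / deg            # average neighbor features
    return np.tanh(H @ W)        # shared linear transform + nonlinearity

def reparameterize(mu, log_var):
    """Standard VAE reparameterization trick: z = mu + sigma * eps."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z, G, W_dec):
    """Mix latents through a (hypothetical) linear causal graph G, then decode."""
    z_causal = z @ (np.eye(G.shape[0]) + G)   # one linear SEM-style propagation step
    return z_causal @ W_dec

# Toy setup: n nodes with d-dimensional features, k latent causal variables.
n, d, k = 5, 8, 3
X = rng.standard_normal((n, d))
A = (rng.random((n, n)) < 0.4).astype(float)  # random context graph
np.fill_diagonal(A, 0)

W_enc = rng.standard_normal((d, 2 * k)) * 0.1          # maps features to (mu, log_var)
G = np.triu(rng.standard_normal((k, k)) * 0.1, 1)      # upper-triangular => acyclic latent graph
W_dec = rng.standard_normal((k, d)) * 0.1

H = gnn_encode(X, A, W_enc)
mu, log_var = H[:, :k], H[:, k:]
z = reparameterize(mu, log_var)
X_hat = decode(z, G, W_dec)
print(X_hat.shape)   # reconstruction matches the input shape
```

In a full training loop, the reconstruction error and KL term of the VAE objective would be computed on both observational and interventional samples, with intervention effects modeled as shifts on the latents; the sketch above only shows the forward pass.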
πŸ”Ž Similar Papers
No similar papers found.