🤖 AI Summary
This work addresses the limited out-of-distribution generalization of supervised causal learning when intervention targets are unknown. It introduces test-time training into causal discovery for the first time, proposing a novel approach that integrates self-augmented data generation with joint causal inference. At test time, the method dynamically generates augmented samples for each instance to mitigate distribution shift. Building upon the PC algorithm, it employs a two-stage supervised learning framework that balances identifiability of the causal structure with predictive performance. Experiments on the bnlearn benchmark demonstrate that the proposed method significantly outperforms existing approaches, achieving state-of-the-art results in both causal graph recovery and intervention target detection.
📝 Abstract
Supervised causal learning has shown promise in causal discovery, yet it often struggles to generalize across diverse interventional settings, particularly when intervention targets are unknown. To address this, we propose TICL (Test-time Interventional Causal Learning), a novel method that synergizes Test-Time Training with Joint Causal Inference. Specifically, we design a self-augmentation strategy that generates instance-specific training data at test time, effectively mitigating distribution shift. Furthermore, by integrating joint causal inference, we develop a PC-inspired two-phase supervised learning scheme that effectively leverages the self-augmented training data while ensuring theoretical identifiability. Extensive experiments on bnlearn benchmarks demonstrate TICL's superiority in multiple aspects of causal discovery and intervention target detection.
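To make the test-time self-augmentation idea concrete, the sketch below generates several augmented "views" of a single test dataset via bootstrap resampling plus small Gaussian jitter. This is purely illustrative: the paper does not specify its augmentation procedure, and the function name `self_augment` and the bootstrap-plus-noise scheme are assumptions standing in for whatever instance-specific strategy TICL actually uses.

```python
import numpy as np

def self_augment(X, n_views=4, noise_scale=0.05, seed=0):
    """Generate augmented views of one test dataset X (samples x variables).

    Hypothetical stand-in for an instance-specific augmentation strategy:
    each view is a bootstrap resample of the rows plus Gaussian jitter
    scaled by each variable's standard deviation.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    views = []
    for _ in range(n_views):
        idx = rng.integers(0, n, size=n)                  # bootstrap rows
        jitter = rng.normal(0.0, noise_scale, size=(n, d))
        views.append(X[idx] + jitter * X.std(axis=0, keepdims=True))
    return views

# Each view has the same shape as X, so a pretrained causal-learning model
# could be fine-tuned on these views at test time before predicting the graph.
```

The design point illustrated here is that augmentation happens per test instance, so the adaptation data follows the same (possibly shifted) distribution as the instance being explained.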