Causal Lifting of Neural Representations: Zero-Shot Generalization for Causal Inferences

📅 2025-02-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses cross-experimental causal inference under target experiments with no labeled factual outcomes—a setting where conventional empirical risk minimization (ERM) fails due to distributional shifts. We propose a novel “causal uplift” paradigm and the deconfounded empirical risk minimization (DERM) framework, which jointly integrates experimental-setting-aware neural representation learning and synthetic target distribution modeling to achieve zero-shot causal generalization—transferring causal reasoning capability to unseen target domains without any target-label supervision. On the ISTAnt benchmark, our method achieves the first reported zero-shot causal inference performance, substantially outperforming standard ERM baselines. Extensive evaluations on both synthetic and real-world scientific datasets confirm its causal validity, robustness to confounding, and strong generalization across experimental domains.

Technology Category

Application Category

📝 Abstract
A plethora of real-world scientific investigations is waiting to scale with the support of trustworthy predictive models that can reduce the need for costly data annotations. We focus on causal inferences on a target experiment with unlabeled factual outcomes, retrieved by a predictive model fine-tuned on a labeled similar experiment. First, we show that factual outcome estimation via Empirical Risk Minimization (ERM) may fail to yield valid causal inferences on the target population, even in a randomized controlled experiment and infinite training samples. Then, we propose to leverage the observed experimental settings during training to empower generalization to downstream interventional investigations, ``Causal Lifting'' the predictive model. We propose Deconfounded Empirical Risk Minimization (DERM), a new simple learning procedure minimizing the risk over a fictitious target population, preventing potential confounding effects. We validate our method on both synthetic and real-world scientific data. Notably, for the first time, we zero-shot generalize causal inferences on ISTAnt dataset (without annotation) by causal lifting a predictive model on our experiment variant.
Problem

Research questions and friction points this paper is trying to address.

Improve causal inference generalization
Reduce costly data annotations
Prevent confounding effects in predictions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Causal Lifting for zero-shot generalization
Deconfounded Empirical Risk Minimization
Generalization to interventional investigations
🔎 Similar Papers
No similar papers found.