Incorporating Interventional Independence Improves Robustness against Interventional Distribution Shift

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing methods for learning robust discriminative representations of causally related latent variables neglect the independence structure that interventions induce in the data, which degrades generalization under interventional distribution shift, especially when interventional samples are scarce. To address this, we propose RepLIn, a framework that explicitly enforces statistical independence between the representations of the intervened node and those of its non-descendants during interventions. For linear models, we derive sufficient conditions on the proportion of interventional training data under which enforcing this independence lowers the error on interventional data, and we incorporate the independence constraint as a regularization term during training. RepLIn applies to both continuous and discrete latent variables and scales with the number of nodes in the causal graph. Experiments on synthetic, image, and text datasets show that RepLIn improves predictive robustness under interventional distributions, substantially narrowing the performance gap between observational and interventional settings.

📝 Abstract
We consider the problem of learning robust discriminative representations of causally related latent variables. In addition to observational data, the training dataset also includes interventional data obtained through targeted interventions on some of these latent variables, with the goal of learning representations robust against the resulting interventional distribution shifts. Existing approaches treat interventional data like observational data, even when the underlying causal model is known, and ignore the independence relations that arise from these interventions. Since these approaches do not fully exploit the causal relational information resulting from interventions, they learn representations that produce large disparities in predictive performance on observational and interventional data, which worsen when the number of interventional training samples is limited. In this paper, (1) we first identify a strong correlation between this performance disparity and adherence of the representations to the independence conditions induced by the interventional causal model. (2) For linear models, we derive sufficient conditions on the proportion of interventional data in the training dataset under which enforcing interventional independence between representations corresponding to the intervened node and its non-descendants lowers the error on interventional data. Combining these insights, (3) we propose RepLIn, a training algorithm that explicitly enforces this statistical independence during interventions. We demonstrate the utility of RepLIn on a synthetic dataset and on real image and text datasets for facial attribute classification and toxicity detection, respectively. Our experiments show that RepLIn is scalable with the number of nodes in the causal graph and improves the robustness of representations against interventional distribution shifts for both continuous and discrete latent variables.
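The core mechanism described above, enforcing statistical independence between the representation of the intervened node and the representations of its non-descendants, can be approximated with an independence penalty added to the task loss on interventional samples. The sketch below is illustrative only and is not the paper's exact regularizer (the function name `cross_cov_penalty` and the use of a cross-covariance Frobenius norm, a linear proxy for independence, are our assumptions):

```python
import numpy as np

def cross_cov_penalty(z_i: np.ndarray, z_nd: np.ndarray) -> float:
    """Squared Frobenius norm of the cross-covariance between two
    representation batches. It is zero when the batches are (linearly)
    uncorrelated, so minimizing it pushes the representations of the
    intervened node (z_i) and its non-descendants (z_nd) toward
    statistical independence, in the linear sense.

    z_i  : (batch, d1) representations of the intervened node
    z_nd : (batch, d2) representations of its non-descendant nodes
    """
    # Center each batch so the product below estimates covariance.
    z_i = z_i - z_i.mean(axis=0, keepdims=True)
    z_nd = z_nd - z_nd.mean(axis=0, keepdims=True)
    # Empirical cross-covariance matrix, shape (d1, d2).
    c = z_i.T @ z_nd / (len(z_i) - 1)
    return float(np.linalg.norm(c, "fro") ** 2)

# The penalty would be weighted into the training objective on
# interventional samples only, e.g. loss = task_loss + lam * penalty.
```

A nonlinear independence measure (e.g. an HSIC-style kernel statistic) could replace the linear cross-covariance here; the linear version is shown only because it is the simplest self-contained proxy.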
Problem

Research questions and friction points this paper is trying to address.

Learning robust representations of causally related latent variables
Reducing the performance disparity between observational and interventional data
Enforcing interventional independence to improve model robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Explicitly enforces interventional independence between learned representations
Exploits the known causal model for robust discriminative learning
Proposes RepLIn, a training algorithm that scales with the number of causal graph nodes