Partial Transportability for Domain Generalization

📅 2025-03-30
🏛️ Neural Information Processing Systems
📈 Citations: 2
Influential: 0
🤖 AI Summary
In domain generalization, providing provable performance guarantees for predictions on unseen target distributions remains fundamentally challenging due to the absence of target-domain data. Method: This paper proposes a tight bound-estimation framework grounded in partial identifiability and transportability theory. It introduces the first general-purpose transportability estimator by adapting Neural Causal Models (NCMs) to satisfy cross-population structural constraints, and designs a gradient-based optimization algorithm for scalable inference. Contribution/Results: The authors establish theoretical guarantees on estimator consistency and expressive completeness. Empirical evaluation across multi-source domain settings demonstrates that the approach significantly improves the tightness and robustness of upper bounds on target-domain generalization error, particularly under black-box conditions, thereby establishing a novel paradigm for trustworthy transfer learning.

📝 Abstract
A fundamental task in AI is providing performance guarantees for predictions made in unseen domains. In practice, there can be substantial uncertainty about the distribution of new data, and corresponding variability in the performance of existing predictors. Building on the theory of partial identification and transportability, this paper introduces new results for bounding the value of a functional of the target distribution, such as the generalization error of a classifier, given data from source domains and assumptions about the data generating mechanisms, encoded in causal diagrams. Our contribution is to provide the first general estimation technique for transportability problems, adapting existing parameterization schemes such as Neural Causal Models to encode the structural constraints necessary for cross-population inference. We demonstrate the expressiveness and consistency of this procedure and further propose a gradient-based optimization scheme for making scalable inferences in practice. Our results are corroborated with experiments.
Problem

Research questions and friction points this paper is trying to address.

Providing performance guarantees for predictions in unseen domains
Bounding generalization error using source data and causal assumptions
Developing scalable inference methods for transportability problems
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses causal diagrams for domain generalization
Adapts Neural Causal Models for transportability
Proposes gradient-based optimization for scalable inference
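The core idea behind these bullets can be illustrated with a deliberately simple, fully hypothetical toy (not the paper's actual NCM parameterization): a target quantity that is only partially identified by source data can be upper- and lower-bounded by running gradient-based optimization over model parameters, with a penalty enforcing consistency with the observed source statistic. Here the latent-group weight `P_U`, the observed marginal `P_Y_SOURCE`, and the penalty weight are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

P_Y_SOURCE = 0.6   # observed source statistic P(Y=1) (hypothetical)
P_U = 0.5          # assumed weight of latent group U=1 (hypothetical)
LAM = 100.0        # penalty weight enforcing the source-domain constraint

def model(theta):
    # a = P(Y=1 | U=1), b = P(Y=1 | U=0), squashed into [0, 1]
    a, b = sigmoid(theta)
    return a, b

def objective(theta, sign):
    # sign=+1 searches for the upper bound on a; sign=-1 for the lower bound.
    # The quadratic penalty keeps the model consistent with source data:
    # P_U * a + (1 - P_U) * b must match the observed P(Y=1).
    a, b = model(theta)
    constraint = P_U * a + (1 - P_U) * b - P_Y_SOURCE
    return sign * a - LAM * constraint ** 2

def grad(theta, sign, eps=1e-6):
    # central finite-difference gradient (kept simple on purpose)
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (objective(theta + d, sign) - objective(theta - d, sign)) / (2 * eps)
    return g

def bound(sign, steps=5000, lr=0.05):
    # plain gradient ascent over the model parameters
    theta = np.zeros(2)
    for _ in range(steps):
        theta += lr * grad(theta, sign)
    return model(theta)[0]

upper = bound(+1)  # approaches 1.0: b absorbs the constraint (b near 0.2)
lower = bound(-1)  # approaches 0.2: b saturates near 1.0
```

Analytically, the constraint `0.5*a + 0.5*b = 0.6` with `a, b` in [0, 1] leaves the target quantity `a` partially identified in [0.2, 1.0]; the two optimization runs recover (approximately) those endpoints. The paper's actual method replaces this two-parameter toy with neural causal models constrained by a causal diagram, but the bound-by-optimization structure is the same.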
Kasra Jalaldoust
Causal Artificial Intelligence Lab, Columbia University
Alexis Bellot
DeepMind
Causality · Machine Learning · Healthcare
E. Bareinboim
Causal Artificial Intelligence Lab, Columbia University