🤖 AI Summary
Problem: Under the standard Wasserstein distance, interventionally robust optimization objectives in causal models can be discontinuous, undermining stability and generalizability.
Method: We propose a continuity-theoretic framework based on the $G$-causal Wasserstein distance. We first prove that interventionally robust optimization is continuous under this distance. We then design a Causal Normalizing Flow architecture with a universal approximation property that explicitly encodes graph-structured priors and thereby preserves the causal mechanisms. By training the generative model to minimize the $G$-causal Wasserstein distance, we achieve causally consistent data augmentation.
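To make the idea of graph-structured priors concrete, here is a minimal sketch of a graph-masked triangular flow. The DAG, weights, and per-node affine transforms below are illustrative assumptions, not the paper's actual architecture; the point is only that masking each node's shift to its parents keeps the map exactly invertible while respecting the causal ordering.

```python
import numpy as np

# Hypothetical 3-node DAG: X1 -> X2, X1 -> X3, X2 -> X3.
# adjacency[i, j] = 1 means X_j is a parent of X_i.
adjacency = np.array([[0, 0, 0],
                      [1, 0, 0],
                      [1, 1, 0]], dtype=float)

# Illustrative per-node parameters (weights on parents, log-scales).
weights = np.array([[0.0, 0.0, 0.0],
                    [0.8, 0.0, 0.0],
                    [0.5, -0.3, 0.0]])
log_scale = np.array([0.0, -0.5, -1.0])

def forward(z):
    """Map noise z to data x, one node at a time in topological order.

    Each coordinate's shift depends only on its parents in the DAG,
    so the transform is triangular and trivially invertible.
    """
    x = np.zeros_like(z)
    for i in range(len(z)):
        shift = (adjacency[i] * weights[i]) @ x  # parents only
        x[i] = shift + np.exp(log_scale[i]) * z[i]
    return x

def inverse(x):
    """Recover the noise from data by inverting one coordinate at a time."""
    z = np.zeros_like(x)
    for i in range(len(x)):
        shift = (adjacency[i] * weights[i]) @ x
        z[i] = (x[i] - shift) / np.exp(log_scale[i])
    return z

z = np.array([0.3, -1.2, 0.7])
x = forward(z)
assert np.allclose(inverse(x), z)  # the flow is exactly invertible
```

In a learned version, the constant shifts and scales would be neural networks whose inputs are masked by the adjacency matrix, which is how the graph prior is "explicitly encoded" in the transform.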
Results: Experiments on causal regression and mean-variance portfolio optimization demonstrate that our method significantly outperforms non-causal generative baselines, validating the critical benefit of causal structure guidance for robust decision-making.
📝 Abstract
In this paper, we show that interventionally robust optimization problems in causal models are continuous under the $G$-causal Wasserstein distance, but may be discontinuous under the standard Wasserstein distance. This highlights the importance of using generative models that respect the causal structure when augmenting data for such tasks. To this end, we propose a new normalizing flow architecture that satisfies a universal approximation property for structural causal models and can be efficiently trained to minimize the $G$-causal Wasserstein distance. Empirically, we demonstrate that our model outperforms standard (non-causal) generative models in data augmentation for causal regression and mean-variance portfolio optimization in causal factor models.
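For reference, the standard (non-causal) Wasserstein distance that the abstract contrasts against can be estimated between one-dimensional empirical samples with SciPy. The paper's $G$-causal variant additionally constrains the admissible couplings by the graph $G$; that construction is not reproduced here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
p = rng.normal(loc=0.0, size=10_000)  # samples from N(0, 1)
q = rng.normal(loc=1.0, size=10_000)  # samples from N(1, 1)

# Empirical 1-D W1 distance between the two samples.
# For N(0,1) vs N(1,1) the population W1 equals the mean shift, 1.
d = wasserstein_distance(p, q)
```

Because the standard distance compares joint laws without regard to which variables cause which, two models with identical observational fit but different causal mechanisms can be close under it, which is the source of the discontinuity the paper addresses.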