🤖 AI Summary
This work addresses the underutilization of low-dimensional structure in high-dimensional data for conditional simulation, particularly when the support of the conditional measure is genuinely low-dimensional. To this end, the authors propose the Conditional Wasserstein Autoencoder (CWAE) framework, which integrates Wasserstein autoencoders with (block-)triangular mappings and imposes independence assumptions in the latent space to enable efficient conditional sampling. The approach is theoretically grounded through its connection to the conditional optimal transport problem. Three architectural variants are developed and evaluated numerically, demonstrating substantial improvements over the low-rank ensemble Kalman filter, with markedly reduced approximation errors, especially in scenarios where the conditional support is intrinsically low-dimensional.
📝 Abstract
We present Conditional Wasserstein Autoencoders (CWAEs), a framework for conditional simulation that exploits low-dimensional structure in both the conditioned and the conditioning variables. The key idea is to modify a Wasserstein autoencoder to use a (block-)triangular decoder and to impose an appropriate independence assumption on the latent variables. We show that the resulting model is an autoencoder that exploits low-dimensional structure while its decoder can simultaneously be used for conditional simulation. We explore several theoretical properties of CWAEs, including their connections to conditional optimal transport (OT) problems. We also present alternative formulations that lead to three architectural variants, which form the foundation of our algorithms. Finally, we present a series of numerical experiments demonstrating that our CWAE variants achieve substantial reductions in approximation error relative to the low-rank ensemble Kalman filter (LREnKF), particularly on problems where the support of the conditional measures is genuinely low-dimensional.
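To make the block-triangular idea concrete, here is a minimal sketch of such a decoder and of decoder-based conditional sampling. All names, dimensions, and the linear-plus-tanh maps are illustrative assumptions, not the paper's architecture: the point is only the structural property that the first output block depends on one latent block alone, so fixing that block and resampling the other independent latent yields conditional samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper):
# conditioning variable y in R^2, conditioned variable x in R^3,
# latent blocks z_y in R^2 and z_x in R^2, assumed independent.
d_y, d_x, k_y, k_x = 2, 3, 2, 2

# Random weights standing in for a trained decoder.
A = rng.standard_normal((d_y, k_y))   # y-block: depends on z_y only
B = rng.standard_normal((d_x, k_y))   # x-block: depends on z_y ...
C = rng.standard_normal((d_x, k_x))   # ... and on z_x

def decoder(z_y, z_x):
    """Block-triangular decoder: the first output block ignores z_x."""
    y = np.tanh(A @ z_y)              # upper block: f1(z_y)
    x = np.tanh(B @ z_y + C @ z_x)    # lower block: f2(z_y, z_x)
    return y, x

# Conditional sampling given a fixed z_y (e.g. obtained by encoding an
# observed y): draw fresh z_x from the independent latent prior and decode.
z_y_fixed = rng.standard_normal(k_y)
x_samples = [decoder(z_y_fixed, rng.standard_normal(k_x))[1] for _ in range(5)]

# Triangularity in action: the y-output is identical across conditional draws.
y0, _ = decoder(z_y_fixed, rng.standard_normal(k_x))
y1, _ = decoder(z_y_fixed, rng.standard_normal(k_x))
```

Because the upper block never sees `z_x`, resampling `z_x` moves only the `x` output, which is exactly what allows the decoder to double as a conditional sampler.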