🤖 AI Summary
Real-world multiscale problems require modeling entire data distributions rather than individual points. To address this, the authors propose generative distribution embeddings (GDE), an autoencoder framework lifted to the space of distributions: the encoder learns representations of sample sets while satisfying a criterion the authors call distributional invariance, and a conditional generator replaces the conventional decoder, aiming to match the input distribution. For Gaussian and Gaussian-mixture distributions, Euclidean distances in the latent space approximately recover the Wasserstein-2 distance, and linear latent interpolation approximately recovers optimal transport trajectories; theoretically, GDEs learn predictive sufficient statistics. The framework couples conditional generative models with distributionally invariant encoders and connects them to optimal transport geometry. Benchmarks on synthetic datasets show consistently stronger performance than existing approaches, and GDEs deliver improvements across six large-scale computational biology tasks (up to hundreds of millions of single cells and biological sequences), spanning cell population representation, perturbation response prediction, phenotypic modeling, and sequence design.
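The summary states that Euclidean distances between GDE latent codes approximate the Wasserstein-2 distance between the embedded distributions. For Gaussians this target quantity has a closed form, which makes the claim concrete. Below is a short, self-contained sketch of that closed-form $W_2$ distance (the formula is standard; the `psd_sqrt` helper and all variable names are ours, not from the paper):

```python
import numpy as np

def psd_sqrt(S):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def w2_gaussian(m1, S1, m2, S2):
    """Closed-form Wasserstein-2 distance between N(m1, S1) and N(m2, S2):
    W2^2 = ||m1 - m2||^2 + tr(S1 + S2 - 2 (S1^{1/2} S2 S1^{1/2})^{1/2})."""
    S1_half = psd_sqrt(S1)
    cross = psd_sqrt(S1_half @ S2 @ S1_half)
    d2 = np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))

# With equal covariances, W2 reduces to the Euclidean distance between means.
print(w2_gaussian(np.zeros(2), np.eye(2),
                  np.array([3.0, 4.0]), np.eye(2)))  # -> 5.0
```

A trained GDE would, per the summary, place these two Gaussians at latent codes roughly 5.0 apart in Euclidean distance.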
📝 Abstract
Many real-world problems require reasoning across multiple scales, demanding models which operate not on single data points, but on entire distributions. We introduce generative distribution embeddings (GDE), a framework that lifts autoencoders to the space of distributions. In GDEs, an encoder acts on sets of samples, and the decoder is replaced by a generator which aims to match the input distribution. This framework enables learning representations of distributions by coupling conditional generative models with encoder networks which satisfy a criterion we call distributional invariance. We show that GDEs learn predictive sufficient statistics embedded in the Wasserstein space, such that latent GDE distances approximately recover the $W_2$ distance, and latent interpolation approximately recovers optimal transport trajectories for Gaussian and Gaussian mixture distributions. We systematically benchmark GDEs against existing approaches on synthetic datasets, demonstrating consistently stronger performance. We then apply GDEs to six key problems in computational biology: learning representations of cell populations from lineage-tracing data (150K cells), predicting perturbation effects on single-cell transcriptomes (1M cells), predicting perturbation effects on cellular phenotypes (20M single-cell images), modeling tissue-specific DNA methylation patterns (253M sequences), designing synthetic yeast promoters (34M sequences), and spatiotemporal modeling of viral protein sequences (1M sequences).
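The abstract describes an encoder that acts on sets of samples and a generator that replaces the decoder, with the encoder satisfying distributional invariance. One simple way to obtain a set encoder that depends only on the empirical distribution of its input (not on sample order) is per-point featurization followed by mean pooling. The sketch below is a minimal, untrained illustration of that architecture under our own assumptions; the function names, weight shapes, and the energy-distance training proxy are hypothetical, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(X, W):
    # Per-point features followed by mean pooling: the embedding depends only
    # on the empirical distribution of the rows of X, not their order.
    return np.tanh(X @ W).mean(axis=0)

def generate(z, noise, V):
    # Conditional generator: maps the latent code z plus per-sample noise
    # to a new set of samples intended to match the input distribution.
    return np.concatenate([np.tile(z, (len(noise), 1)), noise], axis=1) @ V

def energy_distance(X, Y):
    # Energy distance: one sample-based proxy for matching the generated
    # distribution to the input distribution.
    def mean_dist(A, B):
        return np.mean(np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1))
    return 2.0 * mean_dist(X, Y) - mean_dist(X, X) - mean_dist(Y, Y)

d, k = 2, 4
W = rng.normal(size=(d, k))           # encoder weights (untrained)
V = rng.normal(size=(k + d, d))       # generator weights (untrained)

X = rng.normal(loc=1.0, size=(256, d))        # one input sample set
z = encode(X, W)                              # distributional embedding
X_hat = generate(z, rng.normal(size=(256, d)), V)
loss = energy_distance(X, X_hat)              # would be minimized in training

# Invariance check: shuffling the sample set leaves the embedding unchanged.
z_shuf = encode(X[rng.permutation(len(X))], W)
print(np.allclose(z, z_shuf))  # True
```

In a real system the encoder and generator would be deep networks trained jointly; the point here is only the structure: a permutation-invariant set encoder paired with a conditional generator scored against the input samples.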