🤖 AI Summary
This work proposes the Generalized Gaussian Mixture Process (GGMP) to address the challenges of multimodality, heteroscedasticity, and strong non-Gaussianity in conditional density estimation. By modeling local densities with Gaussian mixtures, aligning mixture components across inputs, and training each component via an independent heteroscedastic Gaussian process, GGMP yields closed-form predictive mixture densities while remaining compatible with standard Gaussian process solvers. The approach overcomes the unimodal limitation of conventional Gaussian processes and avoids the exponential complexity of latent-variable component assignments in naive multimodal GP formulations. As a result, GGMP substantially enhances the capacity to approximate complex non-Gaussian distributions. Its efficacy and scalability are demonstrated through experiments on both synthetic and real-world datasets.
📝 Abstract
Conditional density estimation is complicated by multimodality, heteroscedasticity, and strong non-Gaussianity. Gaussian processes (GPs) provide a principled nonparametric framework with calibrated uncertainty, but standard GP regression is limited by its unimodal Gaussian predictive form. We introduce the Generalized Gaussian Mixture Process (GGMP), a GP-based method for multimodal conditional density estimation in settings where each input may be associated with a complex output distribution rather than a single scalar response. GGMP combines local Gaussian mixture fitting, cross-input component alignment, and per-component heteroscedastic GP training to produce a closed-form Gaussian mixture predictive density. The method is tractable, compatible with standard GP solvers and scalable approximations, and avoids the exponentially large latent-assignment structure of naive multimodal GP formulations. Empirically, GGMP improves distributional approximation on synthetic and real-world datasets with pronounced non-Gaussianity and multimodality.
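The three-stage pipeline described in the abstract (local mixture fitting, component alignment, per-component GP training) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it uses scikit-learn's `GaussianMixture` and `GaussianProcessRegressor`, a crude sort-by-mean alignment step, equal mixture weights, and synthetic bimodal data; all variable names, the `K=2` choice, and the log-variance GP parameterization are assumptions for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
K = 2  # number of mixture components (hypothetical choice)

# Synthetic bimodal data: at each scalar input x, outputs cluster around
# two branches, sin(x)+1 and sin(x)-1, with input-dependent (heteroscedastic) noise.
X_grid = np.linspace(0.0, 6.0, 30)
samples_per_x = 50

means_k = np.zeros((len(X_grid), K))     # aligned component means
log_vars_k = np.zeros((len(X_grid), K))  # aligned component log-variances

for i, x in enumerate(X_grid):
    noise = 0.1 + 0.05 * x
    y = np.concatenate([
        rng.normal(np.sin(x) + 1.0, noise, samples_per_x),
        rng.normal(np.sin(x) - 1.0, noise, samples_per_x),
    ]).reshape(-1, 1)
    # Stage 1: fit a local Gaussian mixture to the outputs at this input.
    gmm = GaussianMixture(n_components=K, random_state=0).fit(y)
    # Stage 2: crude cross-input alignment by sorting components on their
    # means, so component k at one input matches component k at the next.
    order = np.argsort(gmm.means_.ravel())
    means_k[i] = gmm.means_.ravel()[order]
    log_vars_k[i] = np.log(gmm.covariances_.ravel()[order])

# Stage 3: independent GPs per component, one for the mean track and one
# for the log-variance track (log keeps predicted variances positive).
X_col = X_grid.reshape(-1, 1)
mean_gps = [GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X_col, means_k[:, k])
            for k in range(K)]
var_gps = [GaussianProcessRegressor(kernel=RBF(1.0), normalize_y=True).fit(X_col, log_vars_k[:, k])
           for k in range(K)]

def predictive_mixture(x_new):
    """Parameters of the closed-form Gaussian mixture predictive density at x_new."""
    x_new = np.atleast_2d(x_new)
    mus = np.array([gp.predict(x_new)[0] for gp in mean_gps])
    sigmas = np.exp(0.5 * np.array([gp.predict(x_new)[0] for gp in var_gps]))
    weights = np.full(K, 1.0 / K)  # equal weights, for simplicity only
    return weights, mus, sigmas

w, mu, sigma = predictive_mixture(3.0)
print(mu)  # two modes, roughly sin(3)-1 and sin(3)+1
```

Because each component is handled by an ordinary GP, any standard GP solver or sparse/scalable approximation can be dropped in at stage 3, which is the compatibility property the abstract highlights.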