🤖 AI Summary
Standard Gaussian process regression tends to yield overconfident posterior intervals and biased decisions when inputs are subject to measurement error. To address this issue, this work models noisy inputs as probability measures and proposes the deterministic Projected Wasserstein ARD Gaussian Process (PWAGP). By leveraging the Wasserstein distance, the method constructs a closed-form, positive-definite, and scalable covariance function for distributional inputs. Unlike approaches relying on latent variables or Monte Carlo sampling, PWAGP avoids stochastic approximations, thereby enhancing the transparency and robustness of uncertainty quantification. The resulting framework maintains computational efficiency while significantly improving predictive reliability under input noise.
📝 Abstract
Gaussian process (GP) regression is widely used for uncertainty quantification, yet the standard formulation assumes noise-free covariates. When inputs are measured with error, this errors-in-variables (EIV) setting can lead to optimistically narrow posterior intervals and biased decisions. We study GP regression under input measurement uncertainty by representing each noisy input as a probability measure and defining covariance through Wasserstein distances between these measures. Building on this perspective, we instantiate a deterministic projected Wasserstein ARD (PWA) kernel whose one-dimensional components admit closed-form expressions and whose product structure yields a scalable, positive-definite kernel on distributions. Unlike latent-input GP models, PWA-based GPs (PWAGPs) handle input noise without introducing unobserved covariates or Monte Carlo projections, making uncertainty quantification more transparent and robust.
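To make the kernel construction concrete, here is a minimal sketch of a product-of-one-dimensional-Wasserstein kernel of the kind the abstract describes. It assumes each noisy input coordinate is represented as a 1-D Gaussian measure, for which the 2-Wasserstein distance has the well-known closed form W2(N(m1, s1²), N(m2, s2²)) = √((m1 − m2)² + (s1 − s2)²). The function names (`w2_gauss1d`, `pwa_kernel`) and the specific squared-exponential form per dimension are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def w2_gauss1d(m1, s1, m2, s2):
    # Closed-form 2-Wasserstein distance between the 1-D Gaussians
    # N(m1, s1^2) and N(m2, s2^2): sqrt((m1-m2)^2 + (s1-s2)^2).
    return np.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

def pwa_kernel(mu_x, sd_x, mu_y, sd_y, lengthscales):
    # Illustrative product (ARD-style) kernel over input dimensions:
    # each factor is a squared-exponential in the per-dimension W2
    # distance, with its own lengthscale. Since (m, s) embeds the 1-D
    # Gaussian isometrically into the Euclidean plane, each factor is
    # positive definite, and so is the product.
    k = 1.0
    for d, ell in enumerate(lengthscales):
        w2 = w2_gauss1d(mu_x[d], sd_x[d], mu_y[d], sd_y[d])
        k *= np.exp(-0.5 * (w2 / ell) ** 2)
    return k

# Two distributional inputs in 2-D: means and per-coordinate std devs
mu_a, sd_a = np.array([0.0, 1.0]), np.array([0.1, 0.3])
mu_b, sd_b = np.array([0.5, 0.8]), np.array([0.2, 0.2])
ells = [1.0, 0.5]

k_aa = pwa_kernel(mu_a, sd_a, mu_a, sd_a, ells)  # self-covariance is 1
k_ab = pwa_kernel(mu_a, sd_a, mu_b, sd_b, ells)  # symmetric, in (0, 1]
```

Note that when all input standard deviations are zero the measures collapse to point masses and the kernel reduces to an ordinary squared-exponential ARD kernel on the means, which is one way to see the construction as a generalization of standard GP regression.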