🤖 AI Summary
This work addresses the limitations of Gaussian process surrogates, namely their cubic computational cost, stationarity assumptions, and restrictive Gaussian predictive distributions, which hinder their use on large-scale non-stationary data. To overcome these challenges, the authors propose the Generative Bayesian Computation (GBC) framework, which pairs implicit quantile networks with Bayesian computation to learn the full conditional quantile function; at test time, a single forward pass per quantile level produces a draw from a flexible, non-Gaussian predictive distribution. GBC further incorporates stochastic prior ensembles, boundary-aware enhancements, and an active learning strategy. Evaluated on 14 benchmark datasets, GBC outperforms state-of-the-art GP-based methods on 12, achieving up to a 46% improvement in Continuous Ranked Probability Score (CRPS) and scaling linearly to 90,000 training points; under active learning, it reduces RMSE nearly threefold.
📝 Abstract
Gaussian process (GP) surrogates are the default tool for emulating expensive computer experiments, but cubic cost, stationarity assumptions, and Gaussian predictive distributions limit their reach. We propose Generative Bayesian Computation (GBC) via Implicit Quantile Networks (IQNs) as a surrogate framework that targets all three limitations. GBC learns the full conditional quantile function from input–output pairs; at test time, a single forward pass per quantile level produces draws from the predictive distribution.
Across fourteen benchmarks we compare GBC to four GP-based methods. GBC improves CRPS by 11–26% on piecewise jump-process benchmarks, by 14% on a ten-dimensional Friedman function, and scales linearly to 90,000 training points where dense-covariance GPs are infeasible. A boundary-augmented variant matches or outperforms Modular Jump GPs on two-dimensional jump datasets (up to 46% CRPS improvement). In active learning, a randomized-prior IQN ensemble achieves nearly three times lower RMSE than deep GP active learning on Rocket LGBB. Overall, GBC records a favorable point estimate in 12 of 14 comparisons. GPs retain an edge on smooth surfaces where their smoothness prior provides effective regularization.
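To make the surrogate mechanism described in the abstract concrete, here is a minimal NumPy sketch of the implicit-quantile idea on a synthetic toy problem. It is an illustrative assumption throughout, not the paper's implementation: the toy "simulator" (`sin(x)` plus uniform noise), the linear feature model standing in for a neural network, and the names `features` and `sample_predictive` are all invented for this sketch. What it does share with the paper's approach is the training signal (the pinball/quantile loss with quantile levels drawn uniformly at random) and the sampling rule (draw `tau ~ U(0,1)`, then one forward pass yields one draw from the predictive distribution).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an expensive simulator: y | x ~ sin(x) + Uniform(-0.1, 0.1),
# so the true conditional quantile function is q*(x, tau) = sin(x) + 0.2*(tau - 0.5).
n = 2000
x = rng.uniform(-3.0, 3.0, size=n)
y = np.sin(x) + 0.2 * (rng.uniform(size=n) - 0.5)

def features(x, tau):
    # Joint (x, tau) representation; a real IQN uses a neural network here.
    return np.stack([np.ones_like(x), np.sin(x), tau], axis=1)

# Fit by subgradient descent on the pinball (quantile) loss,
# with fresh random quantile levels tau at every step.
w = np.zeros(3)
lr = 0.1
for _ in range(4000):
    tau = rng.uniform(size=n)      # tau ~ U(0,1), resampled each iteration
    phi = features(x, tau)
    q = phi @ w                    # predicted tau-quantile of y given x
    # Pinball-loss subgradient: dL/dq = -(tau - 1{y < q})
    w += lr * phi.T @ (tau - (y < q)) / n

def sample_predictive(x_new, n_draws, rng):
    # One forward pass per quantile level: y_draw = q(x_new, tau), tau ~ U(0,1).
    tau = rng.uniform(size=n_draws)
    xr = np.full(n_draws, x_new)
    return features(xr, tau) @ w

draws = sample_predictive(0.0, 5000, rng)
# Calibration check: the fitted 0.9-quantile should cover ~90% of the data.
cov90 = np.mean(y < features(x, np.full(n, 0.9)) @ w)
```

Because the quantile function is modeled directly, non-Gaussian shapes (here, the uniform noise) come for free, and each predictive draw costs a single forward pass rather than a Cholesky factorization.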