🤖 AI Summary
Gaussian processes (GPs) often suffer from prediction bias and miscalibrated uncertainty in nonstationary modeling due to reliance on static, stationary kernels. To address this, we propose a learnable, adaptive, and interpretable kernel framework grounded in first principles: we construct a differentiable, parametric kernel structure that rigorously satisfies symmetry and positive definiteness; enable flexible nonstationary modeling via basis kernel mappings, balancing interpretability and generalization; and integrate neuron-inspired kernel design with Bayesian uncertainty quantification. Extensive experiments on multiple benchmarks demonstrate substantial improvements in mean prediction accuracy and uncertainty calibration quality. Our approach exhibits strong robustness to hyperparameter choices and architectural variations, consistently outperforming state-of-the-art static and existing nonstationary kernel methods across all evaluation metrics.
📝 Abstract
Gaussian processes (GPs) are powerful probabilistic models that define flexible priors over functions, offering strong interpretability and uncertainty quantification. However, GP models often rely on simple, stationary kernels, which can lead to suboptimal predictions and miscalibrated uncertainty estimates, especially in nonstationary real-world applications. In this paper, we introduce SEEK, a novel class of learnable kernels to model complex, nonstationary functions via GPs. Inspired by artificial neurons, SEEK is derived from first principles to ensure symmetry and positive semi-definiteness, key properties of valid kernels. The proposed method achieves flexible and adaptive nonstationarity by learning a mapping from a set of base kernels. Compared to existing techniques, our approach is more interpretable and much less prone to overfitting. We conduct comprehensive sensitivity analyses and comparative studies to demonstrate that our approach is not only robust to many of its design choices, but also outperforms existing stationary and nonstationary kernels in both mean prediction accuracy and uncertainty quantification.
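The validity guarantee the abstract relies on can be illustrated with a minimal sketch (this is not the SEEK architecture itself, and the base kernels and weight parameterization here are illustrative assumptions): a nonnegative combination of symmetric, positive semi-definite base kernels is itself symmetric and positive semi-definite, so any learned mapping constrained this way produces a valid kernel by construction.

```python
import numpy as np

# Illustrative sketch only (not the paper's SEEK kernel): combine base
# kernels with learned nonnegative weights. Since each base kernel is
# symmetric and PSD, the weighted sum is too, so validity holds for any
# weight setting -- the kind of first-principles guarantee described above.

def rbf(x1, x2, lengthscale=1.0):
    # Squared-exponential base kernel on 1-D inputs.
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def periodic(x1, x2, period=1.0, lengthscale=1.0):
    # Periodic base kernel on 1-D inputs.
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale**2)

def composite_kernel(x1, x2, log_weights):
    # softplus keeps each weight nonnegative, preserving PSD-ness of the sum.
    # In practice log_weights would be learned, e.g. by maximizing the
    # GP marginal likelihood (hypothetical training setup).
    w = np.log1p(np.exp(np.asarray(log_weights, dtype=float)))
    return w[0] * rbf(x1, x2) + w[1] * periodic(x1, x2)

x = np.linspace(0.0, 1.0, 20)
K = composite_kernel(x, x, log_weights=[0.0, -1.0])

# A valid kernel matrix is symmetric with (numerically) nonnegative eigenvalues.
assert np.allclose(K, K.T)
assert np.linalg.eigvalsh(K).min() > -1e-9
```

The same argument extends to products of kernels and to compositions through nonnegative-weight layers, which is why neuron-inspired constructions can remain valid kernels while gaining flexibility.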