Optimal Kernel Learning for Gaussian Process Models with High-Dimensional Input

📅 2025-02-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
High-dimensional Gaussian process (GP) regression suffers from prohibitive computational cost and degraded predictive accuracy. To address this, we propose a sparse kernel learning method that jointly incorporates effect heredity constraints and optimal experimental design to automatically identify "active variables" — inputs exerting significant influence on the output. Our approach integrates the Fedorov-Wynn algorithm into GP kernel structure optimization, leveraging convex combination kernels and a sparsity-inducing variable selection mechanism. This preserves model interpretability while substantially improving both active-variable identification accuracy and prediction performance. Experimental results across multiple benchmark problems demonstrate superior accuracy and efficiency over state-of-the-art methods, effectively mitigating the curse of dimensionality. The proposed framework offers an efficient, interpretable approach to surrogate modeling of complex systems.

📝 Abstract
Gaussian process (GP) regression is a popular surrogate modeling tool for computer simulations in engineering and scientific domains. However, it often struggles with high computational costs and low prediction accuracy when the simulation involves too many input variables. For some simulation models, the outputs may only be significantly influenced by a small subset of the input variables, referred to as the "active variables". We propose an optimal kernel learning approach to identify these active variables, thereby overcoming GP model limitations and enhancing system understanding. Our method approximates the original GP model's covariance function through a convex combination of kernel functions, each utilizing low-dimensional subsets of input variables. Inspired by the Fedorov-Wynn algorithm from optimal design literature, we develop an optimal kernel learning algorithm to determine this approximation. We incorporate the effect heredity principle, a concept borrowed from the field of "design and analysis of experiments", to ensure sparsity in active variable selection. Through several examples, we demonstrate that the proposed method outperforms alternative approaches in correctly identifying active input variables and improving prediction accuracy. It is an effective solution for interpreting the surrogate GP regression and simplifying the complex underlying system.
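The core object in the abstract is a covariance function built as a convex combination of kernels, each restricted to a low-dimensional subset of the inputs. The sketch below illustrates that construction only; it is not the paper's implementation. The squared-exponential base kernel, the fixed lengthscale, and the function names are assumptions chosen for illustration.

```python
import numpy as np

def rbf_kernel(X1, X2, active_dims, lengthscale=1.0):
    """Squared-exponential kernel restricted to a subset of input dimensions.

    Assumed base kernel for illustration; the paper's kernels may differ.
    """
    A, B = X1[:, active_dims], X2[:, active_dims]
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * np.maximum(sq, 0.0) / lengthscale**2)

def convex_combination_kernel(X1, X2, subsets, weights):
    """K = sum_m w_m K_m with w_m >= 0 and sum_m w_m = 1.

    Each K_m depends only on the input dimensions in subsets[m], so a weight
    near zero marks that subset's variables as inactive.
    """
    w = np.asarray(weights, dtype=float)
    assert np.all(w >= 0) and np.isclose(w.sum(), 1.0), "not a convex combination"
    K = np.zeros((X1.shape[0], X2.shape[0]))
    for subset, wm in zip(subsets, w):
        K += wm * rbf_kernel(X1, X2, subset)
    return K
```

Because each component kernel is positive semidefinite and the weights are nonnegative, the combined matrix is a valid GP covariance, and the diagonal stays 1 since the weights sum to one.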
Problem

Research questions and friction points this paper is trying to address.

High-dimensional input challenge
Identify active variables
Improve prediction accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal kernel learning method
Convex combination of kernels
Fedorov-Wynn algorithm adaptation
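The Fedorov-Wynn algorithm from optimal design proceeds by repeatedly mixing the current solution with the single best candidate, using a shrinking step size. A minimal sketch of how such an update could drive kernel weights is below; the scoring criterion (log marginal likelihood with a fixed noise level), the step-size schedule, and all function names are assumptions, not the paper's actual algorithm.

```python
import numpy as np

def rbf(X, dims, lengthscale=1.0):
    """Squared-exponential kernel on a subset of dimensions (assumed form)."""
    A = X[:, dims]
    sq = np.sum(A**2, 1)[:, None] + np.sum(A**2, 1)[None, :] - 2.0 * A @ A.T
    return np.exp(-0.5 * np.maximum(sq, 0.0) / lengthscale**2)

def log_marginal_likelihood(K, y, noise=1e-2):
    """Standard GP log marginal likelihood, used here as a stand-in criterion."""
    n = len(y)
    L = np.linalg.cholesky(K + noise * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return -0.5 * y @ alpha - np.log(np.diag(L)).sum() - 0.5 * n * np.log(2 * np.pi)

def fedorov_wynn_kernel_weights(X, y, subsets, n_iter=50):
    """Greedy Fedorov-Wynn-style update of convex kernel weights.

    Each iteration mixes the current combination with the candidate kernel
    whose inclusion most improves the criterion: w <- (1 - gamma) w + gamma e_j.
    """
    Ks = [rbf(X, s) for s in subsets]
    w = np.full(len(Ks), 1.0 / len(Ks))
    for t in range(n_iter):
        gamma = 1.0 / (t + 2)  # classic diminishing step size
        Kw = sum(wi * Ki for wi, Ki in zip(w, Ks))
        scores = [log_marginal_likelihood((1 - gamma) * Kw + gamma * Kj, y)
                  for Kj in Ks]
        j = int(np.argmax(scores))
        w *= (1 - gamma)
        w[j] += gamma
    return w
```

On data generated from a single active input, weight mass tends to concentrate on the subset containing that input, which is the mechanism by which this style of update performs variable selection. The paper additionally enforces effect heredity constraints on which subsets may enter, which this sketch omits.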