Modular Jump Gaussian Processes

📅 2025-05-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional Gaussian processes (GPs) assume stationarity, limiting their ability to model nonstationary processes with abrupt changes (jumps). While existing jump GPs (JGPs) capture discontinuities, they rely on computationally expensive joint inference, compromising scalability and modularity. This paper proposes a modular jump GP (Modular JGP) that decouples two core tasks: (1) learning optimal neighborhood sizes via manifold-aware adaptive local kernels, and (2) explicitly modeling output-level disparities across jumps using clustering-driven latent features. These components are optimized independently yet work together, eliminating the need for joint inference and improving both modeling flexibility and computational efficiency. On diverse synthetic and real-world jump benchmarks, even a single module outperforms mainstream baselines; combining both modules further improves predictive accuracy and uncertainty calibration.

📝 Abstract
Gaussian processes (GPs) furnish accurate nonlinear predictions with well-calibrated uncertainty. However, the typical GP setup has a built-in stationarity assumption, making it ill-suited for modeling data from processes with sudden changes, or "jumps" in the output variable. The "jump GP" (JGP) was developed for modeling data from such processes, combining local GPs and latent "level" variables under a joint inferential framework. But joint modeling can be fraught with difficulty. We aim to simplify by suggesting a more modular setup, eschewing joint inference but retaining the main JGP themes: (a) learning optimal neighborhood sizes that locally respect manifolds of discontinuity; and (b) a new cluster-based (latent) feature to capture regions of distinct output levels on both sides of the manifold. We show that each of (a) and (b) separately leads to dramatic improvements when modeling processes with jumps. In tandem (but without requiring joint inference) that benefit is compounded, as illustrated on real and synthetic benchmark examples from the recent literature.
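Theme (b) from the abstract, the cluster-based latent "level" feature, can be illustrated with a minimal sketch: cluster the responses to recover the distinct output levels on either side of a jump, then append the cluster label as an extra input before fitting a GP. Everything below (KMeans, an RBF kernel, a synthetic step function) is our own illustrative choice, not the authors' implementation.

```python
# Illustrative sketch of a cluster-based latent "level" feature for jump data.
# All modeling choices here are assumptions for demonstration, not the paper's method.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# 1D step function with noise: a "jump" at x = 0.5
X = rng.uniform(0, 1, size=(200, 1))
y = np.where(X[:, 0] < 0.5, 0.0, 5.0) + rng.normal(0, 0.1, 200)

# Cluster the outputs to recover the two distinct levels across the jump
levels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y.reshape(-1, 1))

# Relabel so that level 0 corresponds to the lower output cluster
if y[levels == 0].mean() > y[levels == 1].mean():
    levels = 1 - levels

# Append the level label as an extra input feature and fit a single GP
X_aug = np.hstack([X, levels.reshape(-1, 1)])
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[0.2, 0.5]), alpha=1e-2
).fit(X_aug, y)
```

At test time the level of a new input is of course unknown; in practice one would first predict it (e.g. with a nearest-neighbor classifier on the inputs) before querying the GP.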
Problem

Research questions and friction points this paper is trying to address.

Modeling data with sudden changes or jumps using Gaussian processes
Avoiding joint inference difficulties in jump Gaussian processes
Improving accuracy by learning optimal neighborhood sizes and cluster-based features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular setup avoids joint inference complexity
Learns optimal neighborhood sizes near discontinuities
Uses cluster-based latent feature for distinct levels
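Theme (a), choosing a local neighborhood size that respects discontinuities, can likewise be sketched in isolation. The heuristic below (comparing per-point log marginal likelihood across candidate neighborhood sizes) is our own stand-in criterion, not the paper's algorithm.

```python
# Hedged sketch: pick a local-GP neighborhood size per test point by comparing
# per-point log marginal likelihood across candidate sizes. The selection
# criterion and kernel here are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def local_gp_predict(X, y, x_star, sizes=(10, 25, 50)):
    """Fit local GPs on the k nearest neighbors of x_star for several k,
    keep the one with the best per-point fit, and return its prediction."""
    d = np.linalg.norm(X - x_star, axis=1)
    order = np.argsort(d)
    best = None
    for k in sizes:
        idx = order[:k]
        gp = GaussianProcessRegressor(
            kernel=RBF(0.1) + WhiteKernel(1e-2), normalize_y=True
        ).fit(X[idx], y[idx])
        score = gp.log_marginal_likelihood_value_ / k  # per-point fit quality
        if best is None or score > best[0]:
            best = (score, gp)
    mean, sd = best[1].predict(x_star.reshape(1, -1), return_std=True)
    return mean[0], sd[0]
```

Near a jump, a small neighborhood that stays on one side of the discontinuity tends to fit its points far better than a large one straddling both levels, which is the intuition behind letting the neighborhood size adapt locally.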
Anna R. Flowers
Department of Statistics, Virginia Tech
Christopher T. Franck
Department of Statistics, Virginia Tech
Mickael Binois
Université Côte d'Azur, Inria, CNRS, LJAD, France
Chiwoo Park
Professor of Industrial and Systems Engineering, University of Washington
Machine Learning · Computer Vision · Shape Analysis · Surrogate Modeling
Robert B. Gramacy
Virginia Tech
statistics · machine learning · optimization · uncertainty quantification