Efficient Level-Crossing Probability Calculation for Gaussian Process Modeled Data

📅 2025-12-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational complexity of level-crossing probability computation for Gaussian processes (GPs) in high-resolution scientific data, this paper proposes an efficient approximation algorithm. Methodologically, we construct a hierarchical spatial index (e.g., an octree) to enable adaptive domain partitioning, evaluating crossing probabilities only in regions where they are non-zero. Furthermore, we derive region-wise upper bounds on crossing probabilities by leveraging the GP kernel structure and observed data, enabling rapid pruning and adaptive index reconstruction. Compared to conventional probabilistic Marching Cubes, our approach significantly reduces computational cost while rigorously preserving accuracy guarantees. Experiments on multiple synthetic and real-world datasets demonstrate both high quantitative accuracy and real-time visualization capability. The method thus provides a scalable solution for large-scale uncertainty quantification and visualization of Gaussian process models.
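The summary above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes a hypothetical helper `gp_stats(region)` that returns conservative posterior statistics (min/max posterior mean and max posterior std-dev over a region), and uses a simple Gaussian tail bound to prune regions whose crossing probability is provably negligible, subdividing the rest.

```python
import math

# Standard normal CDF via erf (no external dependencies).
def phi(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def crossing_prob_upper_bound(mu_min, mu_max, sigma_max, level):
    """Illustrative region-wise bound: if the posterior-mean interval
    [mu_min, mu_max] straddles the level, crossing cannot be ruled out;
    otherwise bound the tail probability using the largest posterior
    std-dev in the region. (The paper derives tighter bounds from the
    GP kernel structure; this is a stand-in for exposition.)"""
    if mu_min <= level <= mu_max:
        return 1.0
    dist = min(abs(level - mu_min), abs(level - mu_max))
    return 2.0 * phi(-dist / sigma_max)

def subdivide(region):
    """Split an axis-aligned box (lo, hi) into 2^d children
    (4 in 2D, 8 in 3D, i.e. an octree split)."""
    lo, hi = region
    mid = [(a + b) / 2 for a, b in zip(lo, hi)]
    d = len(lo)
    children = []
    for mask in range(2 ** d):
        clo, chi = [], []
        for i in range(d):
            if (mask >> i) & 1:
                clo.append(mid[i]); chi.append(hi[i])
            else:
                clo.append(lo[i]); chi.append(mid[i])
        children.append((tuple(clo), tuple(chi)))
    return children

def adaptive_crossing(region, gp_stats, level, eps=1e-3, max_depth=8, depth=0):
    """Recursively subdivide; prune regions whose crossing-probability
    upper bound falls below eps. Returns the leaf regions that still
    need a full probabilistic evaluation."""
    mu_min, mu_max, sigma_max = gp_stats(region)
    ub = crossing_prob_upper_bound(mu_min, mu_max, sigma_max, level)
    if ub < eps:
        return []                    # prune: crossing is (nearly) impossible here
    if depth >= max_depth:
        return [region]              # leaf: evaluate crossing probability here
    out = []
    for child in subdivide(region):
        out += adaptive_crossing(child, gp_stats, level, eps, max_depth, depth + 1)
    return out
```

The key cost saving is that GP reconstruction is only performed in the surviving leaf regions; everything pruned by the bound is skipped entirely.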

📝 Abstract
Almost all scientific data carry uncertainties originating from different sources. Gaussian process regression (GPR) models are a natural way to model data with Gaussian-distributed uncertainties, and GPR also reduces the I/O bandwidth and storage requirements of large scientific simulations. However, reconstruction from GPR models has high computational complexity. To make matters worse, classic approaches to visualizing data uncertainty, such as probabilistic marching cubes, are also computationally expensive, especially for high-resolution data. In this paper, we accelerate level-crossing probability calculation on GPR models by spatially subdividing the data into a hierarchical data structure and adaptively reconstructing values only in regions with non-zero crossing probability. For each region, leveraging the known GPR kernel and the saved data observations, we propose a novel approach to efficiently compute an upper bound on the level-crossing probability inside the region, and we use this bound to drive subdivision and reconstruction decisions. Experiments computing level-crossing probability fields on different datasets demonstrate that our value-occurrence probability estimation is accurate at low computational cost.
Problem

Research questions and friction points this paper is trying to address.

Accelerates level-crossing probability calculation for Gaussian process models
Reduces computational complexity in uncertainty visualization for large datasets
Uses adaptive hierarchical subdivision to efficiently estimate probability bounds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical subdivision for adaptive reconstruction
Upper bound calculation for level-crossing probability
Efficient probability estimation with low computation cost
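For context on the baseline these innovations improve upon: probabilistic marching cubes estimates, per grid cell, the probability that an isocontour crosses it, typically via Monte Carlo sampling of the vertex values. A minimal sketch follows; for simplicity it samples vertices independently, whereas the GP setting would draw from the full joint posterior covariance, which is exactly what makes the baseline expensive.

```python
import random

def cell_crossing_prob_mc(means, stds, level, n_samples=10000, seed=0):
    """Monte Carlo estimate of the probability that an isocontour at
    `level` crosses a cell whose vertex values are Gaussian with the
    given means/stds. Independence across vertices is assumed here for
    simplicity; a faithful GP version samples the joint posterior."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        vals = [rng.gauss(m, s) for m, s in zip(means, stds)]
        above = [v > level for v in vals]
        if any(above) and not all(above):   # vertices straddle the level
            hits += 1
    return hits / n_samples
```

Running this per cell over a high-resolution grid is what dominates the baseline's cost; the paper's upper bounds let most cells skip this sampling step altogether.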
Haoyu Li
The Ohio State University, Los Alamos National Laboratory
Isaac J Michaud
Los Alamos National Laboratory
Ayan Biswas
Los Alamos National Laboratory
Han-Wei Shen
Program Director, NSF IIS/HCC; EIC, IEEE TVCG; Professor, The Ohio State University
visualization · computer graphics · data analytics · machine learning