🤖 AI Summary
In high-dimensional physical data (e.g., from LHC experiments), systematic uncertainties often cannot be factorized across input dimensions, yet existing uncertainty quantification methods impose restrictive factorizability assumptions—leading to significant inaccuracies.
Method: We propose a factorization-free, computationally efficient framework for systematic uncertainty quantification. It employs a derivative-enhanced surrogate model based on Gaussian process regression and integrates Bayesian experimental design for adaptive sampling.
Contribution/Results: Our approach substantially improves both accuracy and computational efficiency in estimating non-factorizable systematic errors. In representative benchmarks, it achieves up to 40% lower estimation error than conventional random or grid-based sampling—using fewer evaluation points. The framework scales effectively to high-dimensional settings and constitutes the first methodology that simultaneously ensures theoretical rigor and practical applicability for systematic error modeling in complex experimental physics.
📝 Abstract
Accurate assessment of systematic uncertainties is an increasingly vital task in physics studies, where large, high-dimensional datasets, like those collected at the Large Hadron Collider, hold the key to new discoveries. Common approaches to assessing systematic uncertainties rely on simplifications, such as assuming that the impact of the various sources of uncertainty factorizes. In this paper, we provide realistic example scenarios in which this assumption fails. We introduce an algorithm that uses Gaussian process regression to estimate the impact of systematic uncertainties extit{without} assuming factorization. The Gaussian process models are enhanced with derivative information, which increases the accuracy of the regression without increasing the number of samples. In addition, we present a novel sampling strategy based on Bayesian experimental design, which is shown to be more efficient than random and grid sampling in our example scenarios.