Bayesian Covariance Uncertainty for Adaptive Pilot-Sampling Termination in Multi-fidelity Uncertainty Quantification

📅 2025-08-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In multi-fidelity uncertainty quantification, inaccurate covariance estimation severely degrades variance-reduction efficiency: too few pilot samples induce large bias, while excessive pilot sampling wastes computational budget. This paper proposes a Bayesian adaptive pilot-sampling termination strategy. It introduces a γ-Gaussian prior that enables efficient posterior updates and a guaranteed positive-definite projection of the covariance matrix, and designs an interpretable loss criterion that decomposes variance inefficiency into an accuracy loss and a cost penalty, enabling budget-aware sampling termination. The method is demonstrated with approximate control variates (ACV) for multi-fidelity Monte Carlo estimation. Evaluated on a monomial benchmark and a PDE-based Darcy flow problem, it achieves near-oracle covariance performance using only a small number of pilot samples, significantly improving estimation robustness and resource utilization under limited budgets.

📝 Abstract
Monte Carlo integration becomes prohibitively expensive when each sample requires a high-fidelity model evaluation. Multi-fidelity uncertainty quantification methods mitigate this by combining estimators from high- and low-fidelity models, preserving unbiasedness while reducing variance under a fixed budget. Constructing such estimators optimally requires the model-output covariance matrix, typically estimated from pilot samples. Too few pilot samples lead to inaccurate covariance estimates and suboptimal estimators, while too many consume budget that could be used for final estimation. We propose a Bayesian framework to quantify covariance uncertainty from pilot samples, incorporating prior knowledge and enabling probabilistic assessments of estimator performance. A central component is a flexible $γ$-Gaussian prior that ensures computational tractability and supports efficient posterior projection under additional pilot samples. These tools enable adaptive pilot-sampling termination via an interpretable loss criterion that decomposes variance inefficiency into accuracy and cost components. While demonstrated here in the context of approximate control variates (ACV), the framework generalizes to other multi-fidelity estimators. We validate the approach on a monomial benchmark and a PDE-based Darcy flow problem. Across these tests, our adaptive method demonstrates its value for multi-fidelity estimation under limited pilot budgets and expensive models, achieving variance reduction comparable to baseline estimators with oracle covariance.
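The abstract's core mechanism can be illustrated with a minimal two-fidelity control-variate estimator: paired pilot samples give a sample covariance between the model outputs, which sets the control-variate weight used in the main estimation phase. The sketch below uses toy stand-in models and NumPy only; it is not the paper's ACV implementation, and all function names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for an expensive high-fidelity model and a cheap surrogate.
f_hi = lambda x: np.sin(x) + 0.05 * x**2
f_lo = lambda x: np.sin(x)

# Pilot phase: estimate the model-output covariance from a few paired samples.
n_pilot = 30
xp = rng.normal(size=n_pilot)
yh, yl = f_hi(xp), f_lo(xp)
cov = np.cov(yh, yl)               # 2x2 sample covariance from pilot data
alpha = cov[0, 1] / cov[1, 1]      # estimated optimal control-variate weight

# Main phase: unbiased control-variate estimate of E[f_hi(X)], X ~ N(0, 1).
# mu_lo is the low-fidelity mean, approximated here with many cheap samples.
mu_lo = f_lo(rng.normal(size=200_000)).mean()
x = rng.normal(size=2_000)
cv_est = f_hi(x).mean() - alpha * (f_lo(x).mean() - mu_lo)
```

A poorly estimated `alpha` (from too few pilots) inflates the variance of `cv_est`; that trade-off between pilot accuracy and pilot cost is exactly what the paper's termination criterion targets.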
Problem

Research questions and friction points this paper is trying to address.

Optimizing pilot sample allocation for multi-fidelity uncertainty quantification
Quantifying covariance uncertainty through Bayesian framework with prior knowledge
Adaptively terminating pilot sampling to balance estimation accuracy and cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian framework quantifies covariance uncertainty from pilot samples
Adaptive pilot-sampling termination via interpretable loss criterion
Flexible γ-Gaussian prior ensures computational tractability and efficiency
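The adaptive-termination idea behind these bullets can be caricatured as a loop that keeps drawing pilot batches while the expected gain in estimator accuracy outweighs the pilot cost. The sketch below is a hypothetical simplification: it proxies covariance uncertainty with a crude standard error of the estimated correlation, not the paper's Bayesian γ-Gaussian posterior, and all names and the specific loss are illustrative.

```python
import numpy as np

def pilot_termination(sample_pair, cost_ratio=0.05, batch=5, max_pilot=200):
    """Adaptively stop pilot sampling when the combined loss stops decreasing.

    sample_pair(n) -> (n, 2) array of paired high/low-fidelity outputs.
    """
    data = sample_pair(batch)
    prev_loss = np.inf
    while len(data) <= max_pilot:
        n = len(data)
        rho = np.corrcoef(data.T)[0, 1]
        # Accuracy loss: uncertainty in the correlation estimate, crudely
        # proxied by its standard error (shrinks roughly like 1/sqrt(n)).
        accuracy_loss = (1 - rho**2) / np.sqrt(max(n - 3, 1))
        # Cost penalty: budget already consumed by pilot sampling.
        cost_penalty = cost_ratio * n
        loss = accuracy_loss + cost_penalty
        if loss >= prev_loss:      # further pilots no longer pay off
            break
        prev_loss = loss
        data = np.vstack([data, sample_pair(batch)])
    return len(data), rho

# Usage with a toy correlated model pair (hypothetical).
rng = np.random.default_rng(1)

def pair(n):
    x = rng.normal(size=n)
    return np.column_stack([np.sin(x) + 0.1 * rng.normal(size=n), np.sin(x)])

n_used, rho = pilot_termination(pair)
```

The point of the sketch is the structure of the stopping rule (accuracy term falling, cost term rising), not the particular proxy used for covariance uncertainty.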
Thomas E. Coons
Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109
Aniket Jivani
Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109
Xun Huan
Associate Professor of Mechanical Engineering, University of Michigan
Uncertainty Quantification · Optimal Experimental Design · Bayesian Methods · Machine Learning