🤖 AI Summary
In multifidelity uncertainty quantification, inaccurate covariance estimation severely degrades variance reduction: too few pilot samples yield inaccurate covariance estimates and thus suboptimal estimators, while too many waste computational budget. This paper proposes a Bayesian strategy for adaptively terminating pilot sampling. It introduces a γ-Gaussian prior that enables efficient posterior updates and guarantees a positive-definite projection of the covariance matrix, and it designs an interpretable loss criterion that decomposes variance inefficiency into an accuracy loss and a cost penalty, supporting budget-aware termination decisions. The framework is demonstrated with approximate control variate (ACV) multifidelity Monte Carlo estimators. On polynomial benchmarks and a Darcy flow permeability problem, the method attains near-oracle variance reduction from only a small number of pilot samples, improving estimator robustness and budget utilization when models are expensive.
📝 Abstract
Monte Carlo integration becomes prohibitively expensive when each sample requires a high-fidelity model evaluation. Multi-fidelity uncertainty quantification methods mitigate this by combining estimators from high- and low-fidelity models, preserving unbiasedness while reducing variance under a fixed budget. Constructing such estimators optimally requires the model-output covariance matrix, typically estimated from pilot samples. Too few pilot samples lead to inaccurate covariance estimates and suboptimal estimators, while too many consume budget that could be used for final estimation. We propose a Bayesian framework to quantify covariance uncertainty from pilot samples, incorporating prior knowledge and enabling probabilistic assessments of estimator performance. A central component is a flexible $\gamma$-Gaussian prior that ensures computational tractability and supports efficient posterior projection under additional pilot samples. These tools enable adaptive pilot-sampling termination via an interpretable loss criterion that decomposes variance inefficiency into accuracy and cost components. While demonstrated here in the context of approximate control variates (ACV), the framework generalizes to other multi-fidelity estimators. We validate the approach on a monomial benchmark and a PDE-based Darcy flow problem. Across these tests, our adaptive method demonstrates its value for multi-fidelity estimation under limited pilot budgets and expensive models, achieving variance reduction comparable to baseline estimators with oracle covariance.
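The core mechanism the abstract describes, using a pilot-sample covariance estimate to weight a cheap model against an expensive one, can be illustrated with a toy two-fidelity sketch. This is a plain control-variate estimator for illustration only, not the paper's ACV construction or its Bayesian stopping rule; the models `f_hi`/`f_lo` and all sample sizes are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in models: an "expensive" high-fidelity model and a
# cheap, strongly correlated low-fidelity surrogate.
def f_hi(x):
    return np.exp(x)

def f_lo(x):
    return 1.0 + x  # first-order Taylor surrogate of exp(x)

# Pilot phase: estimate the 2x2 output covariance from paired samples.
n_pilot = 20
xp = rng.uniform(0.0, 1.0, n_pilot)
cov = np.cov(f_hi(xp), f_lo(xp))
alpha = cov[0, 1] / cov[1, 1]  # variance-optimal control-variate weight

# Estimation phase: a few expensive evaluations, many cheap ones.
x_hi = rng.uniform(0.0, 1.0, 50)
x_lo = rng.uniform(0.0, 1.0, 5000)
mu_lo = f_lo(x_lo).mean()  # low-fidelity mean, cheap to pin down accurately
# Unbiased for E[f_hi(X)] for any alpha independent of the estimation
# samples; variance shrinks as the model correlation grows.
estimate = f_hi(x_hi).mean() - alpha * (f_lo(x_hi).mean() - mu_lo)
print(estimate)  # should land near E[exp(X)] = e - 1 for X ~ U(0, 1)
```

The trade-off the paper targets is visible here: too few pilot samples make `alpha` noisy and erode the variance reduction, while extra pilot draws spend budget that could otherwise fund `x_hi` evaluations.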