Practical multi-fidelity machine learning: fusion of deterministic and Bayesian models

📅 2024-07-21
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-fidelity modeling faces a core challenge: high-fidelity (HF) data are scarce and expensive, while low-fidelity (LF) data are abundant but biased. This paper proposes a generic three-stage framework: (1) a deterministic LF model as the foundation, (2) cross-fidelity knowledge transfer via transfer learning, and (3) quantification of the remaining uncertainty through a Bayesian residual model. The framework highlights an expressivity trade-off between the transfer-learning and Bayesian models (a more expressive transfer-learning model permits a simpler Bayesian model, and vice versa), and it handles noisy and noise-free multi-fidelity settings in a unified way while substantially simplifying existing approaches. Technically, it supports flexible combinations under a staged training strategy, e.g., kernel ridge regression or deep neural networks for LF modeling paired with Gaussian processes or Bayesian neural networks for the HF residual. Benchmark experiments demonstrate improvements over state-of-the-art methods in prediction accuracy, uncertainty calibration, and computational efficiency.
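The staged strategy can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses a hypothetical 1-D toy problem (the `f_high`/`f_low` functions, sample sizes, and all hyperparameters are assumptions), with scikit-learn's `KernelRidge` for the LF model, a least-squares scale-and-shift as the linear transfer-learning step, and a `GaussianProcessRegressor` for the Bayesian residual:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Hypothetical 1-D toy problem (not from the paper): the HF function is
# expensive to sample; the LF function is a cheap, biased surrogate.
def f_high(x):
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)

def f_low(x):
    return 0.5 * f_high(x) + 10.0 * (x - 0.5) - 5.0

rng = np.random.default_rng(0)
x_lf = rng.uniform(0.0, 1.0, (60, 1))           # abundant LF samples
x_hf = np.linspace(0.0, 1.0, 8).reshape(-1, 1)  # scarce HF samples

# Stage 1: deterministic LF model (kernel ridge regression).
lf_model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=20.0)
lf_model.fit(x_lf, f_low(x_lf).ravel())

# Stage 2: linear transfer learning -- a least-squares scale and shift
# mapping the LF prediction onto the HF data.
phi = np.column_stack([lf_model.predict(x_hf), np.ones(len(x_hf))])
coef, *_ = np.linalg.lstsq(phi, f_high(x_hf).ravel(), rcond=None)

def transfer(x):
    return coef[0] * lf_model.predict(x) + coef[1]

# Stage 3: Bayesian residual model (Gaussian process) on what the
# transfer-learned LF model cannot explain.
residual = f_high(x_hf).ravel() - transfer(x_hf)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6)
gp.fit(x_hf, residual)

def predict(x):
    """HF mean prediction plus the residual model's uncertainty."""
    res_mean, res_std = gp.predict(x, return_std=True)
    return transfer(x) + res_mean, res_std

mean, std = predict(np.array([[0.3]]))
```

The three stages are trained in sequence rather than jointly, which is the "staggered scheme" of the abstract; swapping the LF model for a deep neural network and the residual model for a Bayesian neural network gives the paper's second proposed combination.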

📝 Abstract
Multi-fidelity machine learning methods address the accuracy-efficiency trade-off by integrating scarce, resource-intensive high-fidelity data with abundant but less accurate low-fidelity data. We propose a practical multi-fidelity strategy for problems spanning low- and high-dimensional domains, integrating a non-probabilistic regression model for the low-fidelity with a Bayesian model for the high-fidelity. The models are trained in a staggered scheme, where the low-fidelity model is transfer-learned to the high-fidelity data and a Bayesian model is trained for the residual. This three-model strategy -- deterministic low-fidelity, transfer learning, and Bayesian residual -- leads to a prediction that includes uncertainty quantification both for noisy and noiseless multi-fidelity data. The strategy is general and unifies the topic, highlighting the expressivity trade-off between the transfer-learning and Bayesian models (a complex transfer-learning model leads to a simpler Bayesian model, and vice versa). We propose modeling choices for two scenarios, and argue in favor of using a linear transfer-learning model that fuses 1) kernel ridge regression for low-fidelity with Gaussian processes for high-fidelity; or 2) deep neural network for low-fidelity with a Bayesian neural network for high-fidelity. We demonstrate the effectiveness and efficiency of the proposed strategies and contrast them with the state-of-the-art based on various numerical examples. The simplicity of these formulations makes them practical for a broad scope of future engineering applications.
Problem

Research questions and friction points this paper is trying to address.

Integrating scarce high-fidelity data with abundant low-fidelity data.
Balancing accuracy and efficiency in multi-fidelity machine learning.
Providing uncertainty quantification for noisy and noiseless multi-fidelity data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses a deterministic low-fidelity model with a Bayesian high-fidelity model
Uses transfer learning plus Bayesian residual modeling
Combines kernel ridge regression or a deep neural network (low-fidelity) with a Gaussian process or Bayesian neural network (high-fidelity)