🤖 AI Summary
This work proposes the Bayesian Interpolating Neural Network (B-INN) to address the poor scalability, high computational cost, and limited reliability of existing Bayesian surrogate models in large-scale industrial simulations. B-INN integrates high-order interpolation theory and tensor decomposition into Bayesian modeling, using an alternating direction optimization algorithm to construct a surrogate model whose function space is a subset of that of Gaussian processes while achieving linear inference complexity, O(N). Compared to conventional Bayesian neural networks and Gaussian processes, B-INN maintains robust uncertainty quantification while accelerating inference by 20 to 10,000 times, substantially enhancing its practicality for industrial-scale applications such as active learning.
📝 Abstract
Neural networks and machine learning models for uncertainty quantification suffer from limited scalability and poor reliability compared to their deterministic counterparts. In industry-scale active learning settings, where generating a single high-fidelity simulation may require days or weeks of computation and produce data volumes on the order of gigabytes, such models quickly become impractical. This paper proposes a scalable and reliable Bayesian surrogate model, termed the Bayesian Interpolating Neural Network (B-INN). The B-INN combines high-order interpolation theory with tensor decomposition and an alternating direction algorithm to enable effective dimensionality reduction without compromising predictive accuracy. We theoretically show that the function space of a B-INN is a subset of that of Gaussian processes, while its Bayesian inference exhibits linear complexity, $\mathcal{O}(N)$, with respect to the number of training samples. Numerical experiments demonstrate that B-INNs can be 20 to 10,000 times faster than Bayesian neural networks and Gaussian processes while providing robust uncertainty estimates. These capabilities make B-INN a practical foundation for uncertainty-driven active learning in large-scale industrial simulations, where computational efficiency and robust uncertainty calibration are paramount.
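To make the complexity claim concrete, the following is a minimal sketch (not the paper's B-INN implementation; the basis functions and hyperparameters below are hypothetical) contrasting exact Gaussian process inference, which factorizes an $N \times N$ kernel matrix at $\mathcal{O}(N^3)$ cost, with a Bayesian model built on $M$ fixed basis functions ($M \ll N$), whose training cost grows only linearly in $N$ since the heavy solve involves just an $M \times M$ matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 200, 8                       # training samples, basis functions (M << N)
x = np.linspace(0.0, 1.0, N)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(N)

# --- Exact GP inference: Cholesky of the N x N kernel matrix, O(N^3) ---
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / 0.1 ** 2)
L = np.linalg.cholesky(K + 1e-6 * np.eye(N))           # dominant O(N^3) step
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))    # GP weight vector

# --- Basis-function surrogate: O(N * M^2), linear in N ---
# (polynomial basis chosen purely for illustration)
Phi = np.stack([x ** j for j in range(M)], axis=1)     # N x M design matrix
A = Phi.T @ Phi + 1e-6 * np.eye(M)                     # M x M, assembled in O(N M^2)
w = np.linalg.solve(A, Phi.T @ y)                      # O(M^3), independent of N

print(alpha.shape, w.shape)
```

Doubling $N$ roughly octuples the Cholesky cost but only doubles the design-matrix assembly, which is the essential reason a GP-subset surrogate with a fixed, low-dimensional parameterization can remain tractable at industrial scale.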