Bayesian Interpolating Neural Network (B-INN): a scalable and reliable Bayesian model for large-scale physical systems

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes the Bayesian Interpolating Neural Network (B-INN) to address the poor scalability, high computational cost, and limited reliability of existing Bayesian surrogate models in large-scale industrial simulations. B-INN integrates high-order interpolation theory and tensor decomposition into Bayesian modeling, using an alternating direction optimization algorithm to construct a surrogate model whose function space is a subset of that of Gaussian processes while achieving linear inference complexity, O(N). Compared to conventional Bayesian neural networks and Gaussian processes, B-INN maintains robust uncertainty quantification while accelerating inference by a factor of 20 to 10,000, substantially enhancing its practicality for industrial-scale applications such as active learning.

📝 Abstract
Neural networks and machine learning models for uncertainty quantification suffer from limited scalability and poor reliability compared to their deterministic counterparts. In industry-scale active learning settings, where generating a single high-fidelity simulation may require days or weeks of computation and produce data volumes on the order of gigabytes, they quickly become impractical. This paper proposes a scalable and reliable Bayesian surrogate model, termed the Bayesian Interpolating Neural Network (B-INN). The B-INN combines high-order interpolation theory with tensor decomposition and an alternating direction algorithm to enable effective dimensionality reduction without compromising predictive accuracy. We theoretically show that the function space of a B-INN is a subset of that of Gaussian processes, while its Bayesian inference exhibits linear complexity, $\mathcal{O}(N)$, with respect to the number of training samples. Numerical experiments demonstrate that B-INNs can be from 20 to 10,000 times faster than Bayesian neural networks and Gaussian processes while providing robust uncertainty estimates. These capabilities make B-INN a practical foundation for uncertainty-driven active learning in large-scale industrial simulations, where computational efficiency and robust uncertainty calibration are paramount.
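The abstract's central scaling claim can be illustrated with a generic sketch: any Bayesian surrogate built on a fixed, finite set of basis functions (as interpolation-based models are) admits weight-space inference whose cost is O(N·m²) for m basis functions, i.e. linear in the number of samples N, in contrast to the O(N³) factorization a full Gaussian process requires. The snippet below is not the paper's B-INN (which uses high-order interpolation, tensor decomposition, and an alternating direction algorithm); it is a minimal stand-in using piecewise-linear "hat" basis functions and standard Bayesian linear regression, purely to show where the linear-in-N complexity comes from. The basis choice, knot placement, and prior/noise precisions (`alpha`, `beta`) are illustrative assumptions.

```python
import numpy as np

def hat_basis(x, knots):
    # Piecewise-linear "hat" interpolation functions centered at each knot
    # (a simple stand-in for the paper's high-order interpolation basis).
    h = knots[1] - knots[0]
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

def fit_bayesian_surrogate(X, y, knots, alpha=1e-2, beta=25.0):
    # Weight-space Bayesian inference over m fixed basis functions.
    # Building Phi.T @ Phi costs O(N * m^2): linear in N, unlike the
    # O(N^3) Cholesky of the N x N covariance in a full GP.
    Phi = hat_basis(X, knots)                             # (N, m)
    A = alpha * np.eye(len(knots)) + beta * Phi.T @ Phi   # (m, m)
    S = np.linalg.inv(A)                                  # posterior covariance
    mean = beta * S @ Phi.T @ y                           # posterior mean weights
    return mean, S

def predict(Xnew, knots, mean, S, beta=25.0):
    # Predictive mean and variance; cost per query point is O(m^2).
    Phi = hat_basis(Xnew, knots)
    mu = Phi @ mean
    var = 1.0 / beta + np.sum((Phi @ S) * Phi, axis=1)
    return mu, var

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * X) + 0.2 * rng.normal(size=200)
knots = np.linspace(0.0, 1.0, 21)

mean, S = fit_bayesian_surrogate(X, y, knots)
mu, var = predict(np.array([0.25, 0.75]), knots, mean, S)
```

Because the model's function space is spanned by a finite basis, it is (as the paper notes for B-INN) a subset of what a GP can represent; the payoff is that the posterior lives in an m-dimensional weight space whose size does not grow with the training set.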
Problem

Research questions and friction points this paper is trying to address.

uncertainty quantification
scalability
reliability
large-scale physical systems
Bayesian surrogate model
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bayesian Interpolating Neural Network
tensor decomposition
uncertainty quantification
scalable Bayesian inference
active learning