🤖 AI Summary
This work addresses the low prediction accuracy and poorly quantified uncertainty of critical heat flux (CHF) estimation under nuclear reactor dryout conditions. We propose a hybrid modeling framework that integrates physics-informed modeling with uncertainty-aware learning. Methodologically, we combine the Biasi and Bowring semi-empirical correlations with three uncertainty quantification techniques—deep neural network (DNN) ensembles, Bayesian neural networks, and deep Gaussian processes—and conduct the first systematic performance evaluation of these approaches in physics-informed CHF modeling. Results show that the Biasi + DNN ensemble achieves a 1.846% mean absolute relative error when training data are plentiful and exhibits well-calibrated predictive uncertainty. All hybrid models significantly outperform purely data-driven baselines, mitigating the generalization and reliability bottlenecks of low-data regimes. The framework thus provides a CHF prediction approach that is accurate, physically interpretable, and robust, supporting nuclear safety analysis.
📝 Abstract
Critical heat flux (CHF) is a key quantity in modeling boiling systems because of its impact on heat transfer and on component temperature and performance. This study develops and validates an uncertainty-aware hybrid modeling approach that combines machine learning with physics-based models to predict CHF under dryout conditions in nuclear reactors. Two empirical correlations, Biasi and Bowring, were paired with three machine learning uncertainty quantification techniques: deep neural network (DNN) ensembles, Bayesian neural networks, and deep Gaussian processes. A pure machine learning model with no physics base served as a baseline for comparison. Model performance and uncertainty were examined under both plentiful and limited training data scenarios using parity plots, uncertainty distributions, and calibration curves. The results indicate that the Biasi hybrid DNN ensemble achieved the most favorable performance (a mean absolute relative error of 1.846% with stable uncertainty estimates), particularly in the plentiful data scenario. The Bayesian neural network models showed slightly higher error and uncertainty but superior calibration, while the deep Gaussian process models underperformed on most metrics. All hybrid models outperformed the pure machine learning configurations, demonstrating robustness to data scarcity.
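The hybrid pattern described above—a physics-based correlation corrected by a learned model, with an ensemble supplying the uncertainty estimate—can be sketched in a few lines. The sketch below is illustrative only: `base_correlation` is a hypothetical stand-in (not the actual Biasi or Bowring correlation), and the ensemble members are simple bootstrap-resampled linear fits rather than the paper's deep neural networks. The structure, however, mirrors the approach: each member learns the residual between data and physics, the hybrid prediction is the physics baseline plus the mean learned correction, and member disagreement serves as the predictive uncertainty.

```python
import random
import statistics

def base_correlation(x):
    # Hypothetical stand-in for a physics-based CHF correlation
    # (NOT the real Biasi/Bowring formula): a deliberately
    # imperfect baseline that the ML correction must fix.
    return 1.0 / (x + 0.5)

def fit_linear(xs, ys):
    # Closed-form least squares for y = w*x + b; plays the role
    # of one (very small) ensemble member.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    w = sxy / sxx
    b = my - w * mx
    return lambda x: w * x + b

def train_hybrid_ensemble(xs, ys, n_members=10, seed=0):
    # Each member learns the residual (data minus physics) on a
    # bootstrap resample of the training set.
    rng = random.Random(seed)
    residuals = [y - base_correlation(x) for x, y in zip(xs, ys)]
    members = []
    for _ in range(n_members):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        members.append(fit_linear([xs[i] for i in idx],
                                  [residuals[i] for i in idx]))
    return members

def predict(members, x):
    # Hybrid prediction: physics baseline + learned correction.
    # The ensemble mean is the point estimate; the spread across
    # members is the uncertainty estimate.
    preds = [base_correlation(x) + m(x) for m in members]
    return statistics.mean(preds), statistics.stdev(preds)
```

In the paper's setting, the linear members would be replaced by independently initialized DNNs, but the read-out is the same: a point prediction from the ensemble mean and a calibratable uncertainty from the ensemble spread, which is what the parity plots and calibration curves evaluate.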