🤖 AI Summary
This work addresses the challenge of efficiently and accurately quantifying uncertainty in Monte Carlo Dropout (MC Dropout) models under limited computational budgets. It introduces, for the first time, a multilevel Monte Carlo (MLMC) framework for MC Dropout, proposing a cross-fidelity dropout-mask reuse strategy that constructs coupled coarse-to-fine estimators. This approach yields unbiased estimates of the predictive mean and variance while substantially reducing sampling variance. The method is evaluated on forward and inverse benchmark problems that combine physics-informed neural networks (PINNs) with the Uzawa algorithm, empirically validating the theoretically predicted variance decay rates. Compared with conventional single-level MC Dropout, the proposed MLMC scheme achieves significantly higher estimation efficiency at equivalent computational cost.
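To make the coupling idea concrete, here is a minimal NumPy sketch of mask reuse across fidelities. Everything in it is illustrative rather than the paper's code: `forward` is a hypothetical toy dropout network, the keep probability of 0.9 and the function names are assumptions, and the coarse estimator is taken to reuse the first half of the fine estimator's dropout masks.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, mask):
    # Toy stand-in for an MC-dropout forward pass: fixed weights,
    # hidden units zeroed by a Bernoulli mask (inverted dropout, keep prob 0.9).
    W = np.linspace(0.5, 1.5, mask.size)
    h = np.tanh(W * x) * mask / 0.9
    return h.mean()

def coupled_difference(x, n_fine, n_hidden=32):
    # Draw n_fine dropout masks once; the coarse estimator reuses the
    # first n_fine // 2 of them. Fine and coarse moment estimates are
    # therefore positively correlated, so the level difference entering
    # the telescoping sum has small variance.
    masks = rng.random((n_fine, n_hidden)) < 0.9
    preds = np.array([forward(x, m) for m in masks])
    p_fine = preds.var()                   # predictive variance, all passes
    p_coarse = preds[: n_fine // 2].var()  # same quantity, reused subset
    return p_fine - p_coarse

print(coupled_difference(0.3, n_fine=64))
```

Because both estimates are computed from shared masks rather than independent redraws, the difference concentrates near zero, which is what drives the MLMC variance reduction at fixed evaluation budget.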
📝 Abstract
We develop a multilevel Monte Carlo (MLMC) framework for uncertainty quantification with Monte Carlo dropout. Treating dropout masks as a source of epistemic randomness, we define a fidelity hierarchy by the number of stochastic forward passes used to estimate predictive moments. We construct coupled coarse–fine estimators by reusing dropout masks across fidelities, yielding telescoping MLMC estimators for both predictive means and predictive variances that remain unbiased for the corresponding dropout-induced quantities while reducing sampling variance at fixed evaluation budget. We derive explicit bias, variance, and effective-cost expressions, together with sample-allocation rules across levels. Numerical experiments on forward and inverse PINNs–Uzawa benchmarks confirm the predicted variance rates and demonstrate efficiency gains over single-level MC dropout at matched cost.
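The abstract does not spell out the sample-allocation rules, but the classic MLMC rule (Giles, 2008) chooses per-level sample counts proportional to sqrt(V_l / C_l), where V_l and C_l are the variance and cost of the coupled level-l difference. A short sketch assuming that standard rule (the paper's exact scheme may differ; the numbers below are made-up placeholders):

```python
import numpy as np

def mlmc_allocation(V, C, budget):
    # Standard MLMC allocation: n_l proportional to sqrt(V_l / C_l),
    # scaled so that the total cost sum(n_l * C_l) matches `budget`.
    V, C = np.asarray(V, float), np.asarray(C, float)
    weights = np.sqrt(V / C)
    scale = budget / np.sum(weights * C)
    return np.maximum(1, np.floor(scale * weights)).astype(int)

# Hypothetical example: level variances decay while costs grow geometrically.
V = [1e-2, 4e-3, 1e-3, 2.5e-4]
C = [1, 2, 4, 8]
print(mlmc_allocation(V, C, budget=1000))  # most samples go to cheap coarse levels
```

Under the variance decay rates the paper reports, such an allocation concentrates forward passes on the cheap coarse levels, which is the source of the claimed efficiency gain over single-level MC dropout at matched cost.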