Uncertainty Quantification for Large-Scale Deep Networks via Post-StoNet Modeling

📅 2025-08-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Accurately quantifying prediction uncertainty in large-scale deep neural networks (DNNs) remains challenging. To address this, we propose a post-hoc uncertainty calibration method based on stochastic neural networks (StoNets): the pre-trained DNN's last-hidden-layer outputs serve as inputs to a lightweight StoNet trained with a sparsity-inducing penalty, which then produces well-calibrated, high-accuracy prediction intervals. This work introduces StoNets into the post-hoc uncertainty-quantification framework for DNNs and establishes consistency guarantees for sparse parameter estimation, thereby extending the theoretical foundations of sparse learning to deep models. Empirical evaluations across multiple benchmark datasets demonstrate that the method yields significantly shorter prediction intervals than conformal prediction while maintaining validity (honesty), and achieves substantially lower calibration error than state-of-the-art post-hoc calibration approaches.

📝 Abstract
Deep learning has revolutionized modern data science. However, how to accurately quantify the uncertainty of predictions from large-scale deep neural networks (DNNs) remains an unresolved issue. To address this issue, we introduce a novel post-processing approach. This approach feeds the output from the last hidden layer of a pre-trained large-scale DNN model into a stochastic neural network (StoNet), then trains the StoNet with a sparse penalty on a validation dataset and constructs prediction intervals for future observations. We establish a theoretical guarantee for the validity of this approach; in particular, the parameter estimation consistency for the sparse StoNet is essential for the success of this approach. Comprehensive experiments demonstrate that the proposed approach can construct honest confidence intervals with shorter interval lengths compared to conformal methods and achieves better calibration compared to other post-hoc calibration techniques. Additionally, we show that the StoNet formulation provides us with a platform to adapt sparse learning theory and methods from linear models to DNNs.
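The abstract's pipeline (extract last-hidden-layer features from a pre-trained DNN, fit a sparsely penalized model on a validation set, then form prediction intervals) can be illustrated with a deliberately simplified sketch. Everything below is hypothetical: the random `features` matrix stands in for the DNN's last-hidden-layer outputs, and a coordinate-descent lasso head with Gaussian residual intervals substitutes for the paper's actual sparse StoNet and its interval construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for last-hidden-layer outputs of a pre-trained DNN
# evaluated on a validation set, with responses y. Only 3 of 10 features
# matter, so a sparse head should recover a sparse solution.
n, p = 200, 10
features = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:3] = [1.5, -2.0, 1.0]
y = features @ true_w + rng.normal(scale=0.5, size=n)

def fit_sparse_head(X, y, lam=0.1, n_iter=200):
    """Lasso-style sparse linear head via coordinate descent with
    soft-thresholding (a simplified stand-in for the sparse StoNet)."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]      # partial residual excluding j
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    return w

w_hat = fit_sparse_head(features, y)

# Prediction interval for a new point from the Gaussian residual scale
# on the validation features (the paper's StoNet-based construction is
# more elaborate; this is only the shape of the idea).
resid = y - features @ w_hat
sigma = resid.std(ddof=1)
z = 1.96                                        # ~95% normal quantile
x_new = rng.normal(size=p)
pred = x_new @ w_hat
lower, upper = pred - z * sigma, pred + z * sigma
print(f"95% prediction interval: [{lower:.2f}, {upper:.2f}]")
```

The key design point mirrored here is that the penalty drives the coefficients of irrelevant features to exactly zero, which is what makes the consistency analysis of the sparse head tractable.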
Problem

Research questions and friction points this paper is trying to address.

How to accurately quantify prediction uncertainty in large-scale deep neural networks (DNNs)
How to post-process a pre-trained DNN without retraining it from scratch
How to construct prediction intervals that are both valid and short, with low calibration error
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-StoNet modeling: feeding a pre-trained DNN's last-hidden-layer outputs into a StoNet for uncertainty quantification
Sparse-penalty training of the StoNet on a validation dataset, with a consistency guarantee for parameter estimation
Adapting sparse learning theory and methods from linear models to DNNs via the StoNet formulation