Uncertainty Quantification for Deep Regression using Contextualised Normalizing Flows

📅 2025-11-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Quantifying uncertainty in deep regression models is critical for high-stakes decision-making, yet existing approaches either produce only prediction intervals, ignoring distributional shape, or rely on Bayesian methods that require architectural changes and computationally prohibitive retraining. This paper proposes a post hoc framework based on Contextualised Normalizing Flows (CNFs): conditional normalizing flows conditioned on the outputs of a pre-trained model to estimate the full conditional predictive distribution, with no retraining of the underlying predictor. CNFs flexibly capture complex distributional characteristics, including multimodality and asymmetry, while yielding well-calibrated prediction intervals and complete probability density estimates. Experiments show that the method is competitive with state-of-the-art uncertainty quantification approaches and provides richer information for downstream decision-making, without retraining or modifying the predictive model.

📝 Abstract
Quantifying uncertainty in deep regression models is important both for understanding the confidence of the model and for safe decision-making in high-risk domains. Existing approaches that yield prediction intervals overlook distributional information, neglecting the effect of multimodal or asymmetric distributions on decision-making. Similarly, full or approximated Bayesian methods, while yielding the predictive posterior density, demand major modifications to the model architecture and retraining. We introduce MCNF, a novel post hoc uncertainty quantification method that produces both prediction intervals and the full conditioned predictive distribution. MCNF operates on top of the underlying trained predictive model; thus, no predictive model retraining is needed. We provide experimental evidence that the MCNF-based uncertainty estimate is well calibrated, is competitive with state-of-the-art uncertainty quantification methods, and provides richer information for downstream decision-making tasks.
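The abstract describes MCNF as a conditional density model fitted on top of a frozen, pre-trained regressor. The sketch below is a minimal illustration of that idea, not the authors' implementation: a single conditional affine flow in PyTorch whose context is the frozen model's prediction. The names `base_model`, `ContextAffineFlow`, and `fit_flow`, the network sizes, and the training loop are assumptions made for illustration; the paper's method would use a more expressive flow to capture multimodality and asymmetry.

```python
# Minimal sketch (not the authors' code): a post-hoc conditional normalizing
# flow whose context is the output of a frozen, pre-trained regressor.
import torch
import torch.nn as nn

class ContextAffineFlow(nn.Module):
    """Single affine flow z = (y - mu(c)) / sigma(c), conditioned on context c."""
    def __init__(self, context_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),          # outputs mu(c) and log sigma(c)
        )

    def log_prob(self, y, context):
        mu, log_sigma = self.net(context).chunk(2, dim=-1)
        z = (y - mu) * torch.exp(-log_sigma)
        base = torch.distributions.Normal(0.0, 1.0)
        # change of variables: log p(y|c) = log N(z; 0, 1) - log sigma(c)
        return base.log_prob(z) - log_sigma

def fit_flow(flow, base_model, loader, epochs=10, lr=1e-3):
    """Post-hoc fitting by maximum likelihood; the regressor stays frozen."""
    base_model.eval()
    opt = torch.optim.Adam(flow.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():
                context = base_model(x)    # predictor output used as context
            loss = -flow.log_prob(y, context).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return flow
```

Once fitted, the conditional density gives both full distributional information and prediction intervals, e.g. by reading off quantiles of p(y | context), which is the dual output the abstract highlights.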
Problem

Research questions and friction points this paper is trying to address.

Deep regression models need reliable uncertainty estimates for safe decision-making in high-risk domains.
Existing prediction-interval methods overlook distributional information such as multimodality and asymmetry.
Bayesian alternatives yield full predictive densities but demand architectural changes and costly retraining.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post hoc method using contextualised normalizing flows
Generates prediction intervals and full predictive distributions
No retraining needed for underlying predictive model
Adriel Sosa Marco
Arquimea Research Center, Spain
John Daniel Kirwan
Arquimea Research Center, Spain
Alexia Toumpa
Department of Computer Science, University of York, York, UK
Simos Gerasimou
Associate Professor (Senior Lecturer) in Computer Science, University of York
Self-Adaptive Systems · Software Engineering · AI Safety