Towards Distribution-Shift Uncertainty Estimation for Inverse Problems with Generative Priors

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Generative models used as priors for inverse problems often produce hallucinations when confronted with out-of-distribution (OOD) test data. Existing uncertainty quantification methods either require calibration sets, lack statistical guarantees, or fail to characterize risks induced by distributional shift. Method: We propose a training-free, instance-level uncertainty metric that assesses reconstruction stability under random measurement perturbations—serving as a proxy for OOD-induced hallucination risk. Our approach integrates generative priors with learned proximal operators and introduces measurement variability in tomographic reconstruction to quantify stability. Contribution/Results: Experiments on MNIST demonstrate that a model trained exclusively on digit “0” exhibits significantly higher reconstruction variability for other digits, strongly correlated with reconstruction error. This validates the metric’s effectiveness in identifying OOD samples and associated hallucination risks—without requiring knowledge of the training distribution or additional calibration.
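The stability test described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: `reconstruct` below is a plain regularized least-squares placeholder (the paper uses a learned proximal network with a generative prior), and the perturbation scale `sigma` and trial count `num_trials` are assumed parameters.

```python
import numpy as np

def reconstruct(A, y, reg=1e-3):
    # Placeholder solver: regularized least squares. The paper instead
    # applies a learned proximal network as the prior-driven solver.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ y)

def stability_score(A, y, sigma=0.05, num_trials=16, seed=0):
    """Instance-level OOD indicator: reconstruct from several randomly
    perturbed copies of the measurements y and return the mean per-pixel
    standard deviation across reconstructions. Higher values signal
    instability, used here as a proxy for hallucination risk."""
    rng = np.random.default_rng(seed)
    recs = [reconstruct(A, y + sigma * rng.standard_normal(y.shape))
            for _ in range(num_trials)]
    return float(np.stack(recs).std(axis=0).mean())
```

Because the indicator only needs repeated calls to an existing solver, it is training-free and calibration-free, matching the claims in the summary.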

📝 Abstract
Generative models have shown strong potential as data-driven priors for solving inverse problems such as reconstructing medical images from undersampled measurements. While these priors improve reconstruction quality with fewer measurements, they risk hallucinating features when test images lie outside the training distribution. Existing uncertainty quantification methods in this setting (i) require an in-distribution calibration dataset, which may not be available, (ii) provide heuristic rather than statistical estimates, or (iii) quantify uncertainty from model capacity or limited measurements rather than distribution shift. We propose an instance-level, calibration-free uncertainty indicator that is sensitive to distribution shift, requires no knowledge of the training distribution, and incurs no retraining cost. Our key hypothesis is that reconstructions of in-distribution images remain stable under random measurement variations, while reconstructions of out-of-distribution (OOD) images exhibit greater instability. We use this stability as a proxy for detecting distribution shift. Our proposed OOD indicator is efficiently computable for any computational imaging inverse problem; we demonstrate it on tomographic reconstruction of MNIST digits, where a learned proximal network trained only on digit "0" is evaluated on all ten digits. Reconstructions of OOD digits show higher variability and correspondingly higher reconstruction error, validating this indicator. These results suggest a deployment strategy that pairs generative priors with lightweight guardrails, enabling aggressive measurement reduction for in-distribution cases while automatically warning when priors are applied out of distribution.
Problem

Research questions and friction points this paper is trying to address.

Estimating uncertainty in inverse problems when test data differs from training distribution
Detecting distribution shift without requiring calibration datasets or retraining
Preventing hallucinated features in reconstructions using generative priors
Innovation

Methods, ideas, or system contributions that make the work stand out.

Calibration-free uncertainty indicator for distribution shift
Stability under measurement variations detects OOD images
Lightweight guardrails enable aggressive measurement reduction
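The "lightweight guardrail" idea could look like the following minimal sketch. The threshold `tau` is an assumption for illustration; the paper validates the indicator via its correlation with reconstruction error rather than prescribing a specific cutoff.

```python
def guardrail(stability_score, tau=0.1):
    """Hypothetical deployment guardrail: accept a reconstruction when
    its measurement-perturbation variability is low (likely
    in-distribution), and warn when instability suggests the generative
    prior is being applied out of distribution."""
    return "accept" if stability_score <= tau else "warn"
```

In deployment, an "accept" would permit aggressive measurement reduction, while a "warn" would flag the instance for fallback reconstruction or human review.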
Namhoon Kim
School of Electrical and Computer Engineering, Georgia Institute of Technology
Sara Fridovich-Keil
Assistant Professor in ECE, Georgia Tech
computational imaging · machine learning · signal processing · inverse problems