Unsupervised Detection of Distribution Shift in Inverse Problems using Diffusion Models

📅 2025-05-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion model priors in imaging inverse problems suffer performance degradation under train-test distribution shift, and existing unsupervised shift estimation methods require access to clean test images, rendering them inapplicable in real-world inverse settings. Method: We propose the first unsupervised distribution shift estimator that relies solely on indirect (corrupted) measurement data and the score functions of pre-trained diffusion models. Theoretically, we show that the proposed metric estimates the KL divergence between the training and test image distributions. We further introduce a score alignment strategy that matches the out-of-distribution score to the in-distribution score using only corrupted measurements, reducing this divergence. Results: On denoising, deblurring, and CT reconstruction tasks, our method closely approximates the true KL divergence using only corrupted measurements, leading to significant improvements in reconstruction quality without requiring ground-truth test images.
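
The summary's central claim, that score functions alone suffice to estimate KL divergence, can be illustrated with a standard identity (a hedged toy check, not the paper's estimator): for distributions smoothed along a Gaussian noising path, KL(p‖q) equals half the time integral of the expected squared score difference. A minimal NumPy sketch with 1-D Gaussians, where both the smoothed scores and the true KL are known in closed form:

```python
import numpy as np

# Toy check (not the paper's estimator) of the identity
#   KL(p || q) = 1/2 * integral_0^inf E_{x_t ~ p_t} ||s_p(x_t,t) - s_q(x_t,t)||^2 dt,
# where s_p, s_q are the scores of p, q convolved with N(0, t) noise.
# With p = N(mu_p, sigma^2) and q = N(mu_q, sigma^2), everything is closed-form.
mu_p, mu_q, sigma = 0.0, 1.5, 1.0   # assumed toy parameters

def smoothed_score(x, t, mu):
    # Score of N(mu, sigma^2 + t): the base Gaussian plus noise level t.
    return -(x - mu) / (sigma**2 + t)

rng = np.random.default_rng(0)
ts = np.linspace(0.0, 200.0, 2001)   # truncate the infinite time integral
sq_diff = np.empty_like(ts)
for i, t in enumerate(ts):
    x_t = mu_p + np.sqrt(sigma**2 + t) * rng.standard_normal(1000)  # x_t ~ p_t
    d = smoothed_score(x_t, t, mu_p) - smoothed_score(x_t, t, mu_q)
    sq_diff[i] = np.mean(d**2)

# trapezoidal rule for the time integral
kl_from_scores = 0.5 * np.sum((sq_diff[1:] + sq_diff[:-1]) / 2 * np.diff(ts))
kl_exact = (mu_p - mu_q) ** 2 / (2 * sigma**2)  # closed-form Gaussian KL
print(kl_from_scores, kl_exact)  # close to each other (small truncation error)
```

Only samples from the noised distribution and the two score functions enter the estimate; no clean evaluation of p or q is needed, which is the structural point the paper exploits.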

📝 Abstract
Diffusion models are widely used as priors in imaging inverse problems. However, their performance often degrades under distribution shifts between the training and test-time images. Existing methods for identifying and quantifying distribution shifts typically require access to clean test images, which are almost never available while solving inverse problems (at test time). We propose a fully unsupervised metric for estimating distribution shifts using only indirect (corrupted) measurements and score functions from diffusion models trained on different datasets. We theoretically show that this metric estimates the KL divergence between the training and test image distributions. Empirically, we show that our score-based metric, using only corrupted measurements, closely approximates the KL divergence computed from clean images. Motivated by this result, we show that aligning the out-of-distribution score with the in-distribution score -- using only corrupted measurements -- reduces the KL divergence and leads to improved reconstruction quality across multiple inverse problems.
Problem

Research questions and friction points this paper is trying to address.

Detect distribution shift in inverse problems without supervision
Estimate KL divergence using corrupted measurements and scores
Improve reconstruction by aligning out-of-distribution scores
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unsupervised metric estimates distribution shift
Uses corrupted measurements and score functions
Aligns out-of-distribution scores for better reconstruction
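
The score-alignment idea, nudging an out-of-distribution score toward the in-distribution one by minimizing their squared difference, can be sketched in a toy setting. This is a hedged illustration, not the paper's method: the paper performs alignment from corrupted measurements on diffusion-model scores, while here a single hypothetical Gaussian mean parameter `mu_ood` is aligned on clean samples purely to show the optimization loop:

```python
import numpy as np

# Toy sketch of score alignment: gradient descent on an out-of-distribution
# parameter to shrink the squared score difference. Hedged illustration only;
# mu_in, mu_ood, sigma are assumed toy values, not quantities from the paper.
mu_in, sigma = 0.0, 1.0   # in-distribution score parameters
mu_ood = 2.0              # misaligned out-of-distribution parameter

rng = np.random.default_rng(1)
lr = 0.1
for _ in range(200):
    x = mu_in + sigma * rng.standard_normal(512)   # evaluation points
    s_in = -(x - mu_in) / sigma**2                 # in-distribution score
    s_ood = -(x - mu_ood) / sigma**2               # out-of-distribution score
    # analytic gradient of mean (s_ood - s_in)^2 w.r.t. mu_ood
    grad = np.mean(2.0 * (s_ood - s_in)) / sigma**2
    mu_ood -= lr * grad
print(mu_ood)  # has moved to (approximately) mu_in
```

As the score gap closes, the KL estimate from the previous identity shrinks as well, which mirrors the paper's observation that alignment improves downstream reconstruction.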