Distributional Consistency Loss: Beyond Pointwise Data Terms in Inverse Problems

📅 2025-10-15
🤖 AI Summary
In inverse problems, conventional pointwise data-fidelity losses (e.g., MSE) are prone to overfitting noise, degrading reconstruction quality. To address this, we propose a distributional consistency (DC) loss that statistically assesses the agreement between residuals and the theoretical noise distribution via hypothesis testing, elevating data fitting from pixel-wise matching to distribution-level calibration. Unlike conventional approaches, DC loss requires no explicit prior modeling and serves as a drop-in replacement for standard fidelity terms. Evaluated on image denoising and medical imaging tasks, DC loss consistently improves PSNR, eliminates reliance on early stopping, suppresses iterative artifacts, and enhances regularization robustness. Its core innovation lies in replacing pointwise fidelity with distributional consistency driven by probabilistic scoring, the first such formulation enabling noise-robust, prior-free inverse problem solving.

📝 Abstract
Recovering true signals from noisy measurements is a central challenge in inverse problems spanning medical imaging, geophysics, and signal processing. Current solutions balance prior assumptions regarding the true signal (regularization) with agreement to noisy measured data (data-fidelity). Conventional data-fidelity loss functions, such as mean-squared error (MSE) or negative log-likelihood, seek pointwise agreement with noisy measurements, often leading to overfitting to noise. In this work, we instead evaluate data-fidelity collectively by testing whether the observed measurements are statistically consistent with the noise distributions implied by the current estimate. We adopt this aggregated perspective and introduce distributional consistency (DC) loss, a data-fidelity objective that replaces pointwise matching with distribution-level calibration using model-based probability scores for each measurement. DC loss acts as a direct and practical plug-in replacement for standard data consistency terms: i) it is compatible with modern regularizers, ii) it is optimized in the same way as traditional losses, and iii) it avoids overfitting to measurement noise even without the use of priors. Its scope naturally fits many practical inverse problems where the measurement-noise distribution is known and where the measured dataset consists of many independent noisy values. We demonstrate efficacy in two key example application areas: i) in image denoising with deep image prior, using DC instead of MSE loss removes the need for early stopping and achieves higher PSNR; ii) in medical image reconstruction from Poisson-noisy data, DC loss reduces artifacts in highly-iterated reconstructions and enhances the efficacy of hand-crafted regularization. These results position DC loss as a statistically grounded, performance-enhancing alternative to conventional fidelity losses for inverse problems.
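The abstract stops short of a formula, but the idea of distribution-level calibration via "model-based probability scores" can be illustrated. Below is a minimal sketch for Gaussian noise: each residual is mapped through the Gaussian CDF, and the loss measures how far the resulting scores are from Uniform(0, 1) using a Cramér–von Mises-style statistic. The function names (`gaussian_score`, `dc_loss`) and the choice of uniformity statistic are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from math import erf, sqrt

def gaussian_score(r, sigma):
    # Per-measurement probability score: Phi(r / sigma), the Gaussian CDF.
    # If residuals r truly follow N(0, sigma^2), these scores are Uniform(0, 1).
    return 0.5 * (1.0 + np.vectorize(erf)(np.asarray(r, float) / (sigma * sqrt(2.0))))

def dc_loss(residuals, sigma):
    # Compare sorted scores to uniform plotting positions
    # (a Cramer-von Mises-flavoured uniformity statistic).
    p = np.sort(gaussian_score(residuals, sigma))
    n = p.size
    u = (np.arange(1, n + 1) - 0.5) / n
    return float(np.mean((p - u) ** 2))

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=10_000)
well_calibrated = dc_loss(noise, sigma=1.0)      # residuals match the noise model
overfit = dc_loss(np.zeros(10_000), sigma=1.0)   # zero residuals: noise was absorbed
print(well_calibrated < overfit)  # True
```

Note the contrast with MSE: driving every residual to zero minimizes MSE but is heavily penalized here, since all-zero residuals yield constant scores of 0.5, which are maximally inconsistent with a uniform distribution.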
Problem

Research questions and friction points this paper is trying to address.

Replacing pointwise data agreement with distribution-level statistical consistency
Preventing overfitting to measurement noise without requiring early stopping
Improving reconstruction quality in inverse problems with known noise distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replaces pointwise matching with distribution-level calibration
Uses model-based probability scores for measurements
Acts as plug-in replacement for standard data consistency
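The abstract's second application involves Poisson-noisy medical data, where measurements are discrete counts. One standard way to obtain uniform probability scores for discrete distributions is the randomized probability integral transform (PIT); the sketch below applies it to Poisson counts. This is a hedged illustration of how the scoring step could extend to the Poisson case; the function names and the uniformity statistic are assumptions, not the paper's method.

```python
import numpy as np

def poisson_pit(y, lam, rng):
    # Randomised PIT for discrete counts: p = F(y - 1) + U * P(Y = y).
    # Exactly Uniform(0, 1) when y ~ Poisson(lam).
    y = np.asarray(y, dtype=int)
    lam = np.asarray(lam, dtype=float)
    p = np.empty(y.shape)
    for i in np.ndindex(y.shape):
        k, l = y[i], lam[i]
        pmf = np.exp(-l)   # P(Y = 0)
        cdf_below = 0.0    # accumulates F(k - 1)
        for j in range(1, k + 1):
            cdf_below += pmf
            pmf *= l / j   # recurrence: P(Y = j) = P(Y = j-1) * lam / j
        p[i] = cdf_below + rng.uniform() * pmf
    return p

def uniformity_gap(p):
    # Same Cramer-von Mises-flavoured statistic as in the Gaussian sketch.
    p = np.sort(p)
    u = (np.arange(1, p.size + 1) - 0.5) / p.size
    return float(np.mean((p - u) ** 2))

rng = np.random.default_rng(1)
lam = np.full(5_000, 4.0)
y = rng.poisson(lam)
gap_good = uniformity_gap(poisson_pit(y, lam, rng))      # correct noise model
gap_bad = uniformity_gap(poisson_pit(y, 2.0 * lam, rng)) # misspecified rates
print(gap_good < gap_bad)  # True
```

A misspecified forward model (here, rates doubled) skews the scores away from uniformity, so the same distribution-level criterion that detects noise overfitting also flags model mismatch.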
George Webber
PhD student, King's College London
Inverse problems · Medical image reconstruction · Score-based generative models · Deep learning
Andrew J. Reader
School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom