ReGuidance: A Simple Diffusion Wrapper for Boosting Sample Quality on Hard Inverse Problems

📅 2025-06-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
For challenging inverse problems, such as low-SNR image restoration and high-magnification super-resolution, existing training-free methods (e.g., diffusion posterior sampling, DPS) often drift off the data manifold because the reward signal is uninformative, yielding implausible reconstructions. This paper proposes ReGuidance, a training-free wrapper around a pretrained diffusion model: it inverts a candidate solution by running the unconditional probability flow ODE in reverse and uses the resulting latent to initialize DPS, jointly improving solution realism and measurement consistency. The method requires only black-box access to the pretrained model, with no retraining, and the authors prove that on certain multimodal data distributions it simultaneously boosts the reward and moves the candidate closer to the data manifold, which they present as the first rigorous algorithmic guarantee for DPS. Experiments show that ReGuidance substantially outperforms state-of-the-art baselines on large-box inpainting and high-factor super-resolution, improving visual quality and measurement fidelity at once.

📝 Abstract
There has been a flurry of activity around using pretrained diffusion models as informed data priors for solving inverse problems, and more generally around steering these models using reward models. Training-free methods like diffusion posterior sampling (DPS) and its many variants have offered flexible heuristic algorithms for these tasks, but when the reward is not informative enough, e.g., in hard inverse problems with low signal-to-noise ratio, these techniques veer off the data manifold, failing to produce realistic outputs. In this work, we devise a simple wrapper, ReGuidance, for boosting both the sample realism and reward achieved by these methods. Given a candidate solution $\hat{x}$ produced by an algorithm of the user's choice, we propose inverting the solution by running the unconditional probability flow ODE in reverse starting from $\hat{x}$, and then using the resulting latent as an initialization for DPS. We evaluate our wrapper on hard inverse problems like large box in-painting and super-resolution with high upscaling. Whereas state-of-the-art baselines visibly fail, we find that applying our wrapper on top of these baselines significantly boosts sample quality and measurement consistency. We complement these findings with theory proving that on certain multimodal data distributions, ReGuidance simultaneously boosts the reward and brings the candidate solution closer to the data manifold. To our knowledge, this constitutes the first rigorous algorithmic guarantee for DPS.
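The two-stage recipe described in the abstract (invert the candidate via the unconditional probability flow ODE, then restart DPS from that latent) can be sketched on a one-dimensional toy problem where the score is known in closed form. The Gaussian prior, linear noise schedule, Tweedie-based guidance, and the weight `zeta` below are illustrative assumptions for this sketch, not the paper's actual model or hyperparameters.

```python
import numpy as np

# Toy stand-in for a pretrained diffusion model: the data prior is N(0, 1),
# so the score of the sigma-noised marginal is known in closed form.
def score(x, sigma):
    return -x / (1.0 + sigma**2)

SIGMAS = np.linspace(0.01, 10.0, 2000)  # noise levels, sigma_min -> sigma_max

def pf_ode_invert(x_hat):
    """Stage 1: run the unconditional probability flow ODE in reverse
    (data -> noise) from a candidate solution x_hat, yielding a latent.
    Euler steps on dx/dsigma = -sigma * score(x, sigma)."""
    x = x_hat
    for lo, hi in zip(SIGMAS[:-1], SIGMAS[1:]):
        x = x + (hi - lo) * (-lo * score(x, lo))
    return x

def dps_from_latent(latent, y=None, meas_var=0.1, zeta=0.3):
    """Stage 2: integrate the probability flow ODE back down (noise -> data)
    starting from the latent. If a measurement y ~ x + noise is supplied,
    add a DPS-style guidance step toward it using the Tweedie estimate
    of the clean sample."""
    x = latent
    rev = SIGMAS[::-1]
    for hi, lo in zip(rev[:-1], rev[1:]):
        x = x + (lo - hi) * (-hi * score(x, hi))  # unconditional ODE step
        if y is not None:
            x0_hat = x + hi**2 * score(x, hi)     # Tweedie denoised estimate
            x = x + zeta * (hi - lo) * (y - x0_hat) / meas_var
    return x

# The probability flow ODE is deterministic, so inverting and then sampling
# unconditionally should roughly recover the candidate; adding guidance
# instead pulls the reconstruction toward the measurement.
x_hat = 0.7
latent = pf_ode_invert(x_hat)
reconstructed = dps_from_latent(latent)       # no measurement: ~ x_hat
guided = dps_from_latent(latent, y=-0.5)      # pulled toward measurement
```

The round trip mirrors the wrapper's key property: the inversion preserves the candidate's content, while the guided re-run trades some of it for measurement consistency.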
Problem

Research questions and friction points this paper is trying to address.

Training-free methods like DPS drift off the data manifold on hard inverse problems
Low signal-to-noise measurements make the reward signal uninformative
Resulting reconstructions lack realism and measurement consistency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Training-free wrapper boosts both sample realism and achieved reward
Inverts candidate solutions via the unconditional probability flow ODE, then re-initializes DPS
Provides the first rigorous algorithmic guarantee for DPS