🤖 AI Summary
This study investigates whether diffusion priors that are mismatched to the target signal distribution or of low fidelity can still effectively recover signals in image inverse problems. By establishing a theoretical framework grounded in Bayesian consistency, the authors demonstrate that when observational information is sufficiently rich—such as in high-dimensional settings or when a large number of pixels are known—the posterior distribution concentrates around the true signal, enabling weak priors to perform nearly as well as strong, domain-matched ones. Experimental results validate this mechanism and delineate the boundary conditions under which weak priors fail. This work provides the first systematic characterization of the robustness conditions for diffusion priors under distributional shift, offering theoretical guidance for prior selection in practical applications.
📝 Abstract
Can a diffusion model trained on bedrooms recover human faces? Diffusion models are widely used as priors for inverse problems, but standard approaches usually assume a high-fidelity model trained on data that closely match the unknown signal. In practice, one often must use a mismatched or low-fidelity diffusion prior. Surprisingly, these weak priors often perform nearly as well as full-strength, in-domain baselines. We study when and why inverse solvers are robust to weak diffusion priors. Through extensive experiments, we find that weak priors succeed when measurements are highly informative (e.g., many observed pixels), and we identify regimes where they fail. Our theory, based on Bayesian consistency, gives conditions under which high-dimensional measurements make the posterior concentrate near the true signal. These results provide a principled justification for when weak diffusion priors can be used reliably.
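The Bayesian-consistency intuition behind the abstract can be illustrated with a toy conjugate-Gaussian model (a hypothetical 1-D stand-in for the paper's diffusion setting, not its actual method): as the number of informative measurements grows, the likelihood dominates the posterior, so a badly mismatched ("weak") prior yields nearly the same estimate as a well-matched ("strong") one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Observations y_i ~ N(theta_true, sigma^2); conjugate Gaussian prior N(mu0, tau0^2).
def posterior_mean(y, sigma2, mu0, tau02):
    # Standard conjugate-Gaussian update: precisions add, means are precision-weighted.
    prec = 1.0 / tau02 + len(y) / sigma2
    return (mu0 / tau02 + y.sum() / sigma2) / prec

theta_true, sigma2 = 3.0, 1.0
y = rng.normal(theta_true, np.sqrt(sigma2), size=10_000)  # many informative measurements

matched = posterior_mean(y, sigma2, mu0=3.0, tau02=1.0)      # prior centered on the truth
mismatched = posterior_mean(y, sigma2, mu0=-50.0, tau02=1.0)  # severely mismatched prior

# With 10,000 measurements, both posterior means concentrate near theta_true,
# so the prior mismatch is nearly irrelevant.
print(abs(matched - mismatched))
```

With only a handful of observations the two estimates would differ substantially, mirroring the regimes the abstract identifies where weak priors fail.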