🤖 AI Summary
Real-world image restoration faces two fundamental challenges: accurately modeling image priors and precisely characterizing the degradation process, especially in practical scenarios where explicit degradation modeling is constrained by restrictive parametric assumptions. To address this, the authors propose Invert2Restore, a zero-shot, training-free, and degradation-agnostic method. Invert2Restore uses a pre-trained diffusion model as a deterministic mapping between standard normal noise samples and clean images. The key observation is that the input noise mapped to a degraded image lies in a low-probability density region of the standard normal distribution, so the method restores the image by gradient-guided inversion that steers this noise toward a higher-density region, without any explicit degradation modeling. A single degradation-independent framework handles both fully blind restoration (unknown degradation type and parameters) and partially blind restoration (known degradation type but unknown parameters). Extensive experiments across diverse real-world degradations, including blur, noise, and compression, demonstrate state-of-the-art performance and strong generalization, requiring neither fine-tuning nor dedicated degradation-estimation modules.
📝 Abstract
Two of the main challenges of image restoration in real-world scenarios are the accurate characterization of an image prior and the precise modeling of the image degradation operator. Pre-trained diffusion models have been used very successfully as image priors in zero-shot image restoration methods. However, how best to handle the degradation operator is still an open problem. On real-world data, methods that rely on specific parametric assumptions about the degradation model often face limitations in their applicability. To address this, we introduce Invert2Restore, a zero-shot, training-free method that operates in both fully blind and partially blind settings -- requiring either no prior knowledge of the degradation model, or only knowledge of its parametric form with unknown parameters. Despite this, Invert2Restore achieves high-fidelity results and generalizes well across various types of image degradation. It leverages a pre-trained diffusion model as a deterministic mapping between standard normal samples and undistorted image samples. The key insight is that the input noise mapped by a diffusion model to a degraded image lies in a low-probability density region of the standard normal distribution. Thus, we can restore the degraded image by carefully guiding its input noise toward a higher-density region. We experimentally validate Invert2Restore across several image restoration tasks, demonstrating that it achieves state-of-the-art performance in scenarios where the degradation operator is either unknown or partially known.
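The core mechanism described above can be illustrated in miniature. The sketch below is a hypothetical toy, not the paper's implementation: it replaces DDIM inversion through a diffusion model with a pre-made noise vector `z0` placed in a low-density region of the standard normal (an atypically large norm), then performs the density-guidance step by gradient ascent on the Gaussian log-density, with an illustrative fidelity term keeping the result near `z0`. The step size `eta` and fidelity weight `lam` are made-up values.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy latent dimension

def log_density(z):
    # Unnormalized log-density of the standard normal: -||z||^2 / 2
    return -0.5 * float(z @ z)

# Hypothetical stand-in for the inverted noise of a degraded image:
# a vector pushed into a low-density region (norm ~2.5x the typical scale).
z0 = rng.standard_normal(d) * 2.5

z = z0.copy()
eta, lam = 0.05, 0.1  # step size and fidelity weight (illustrative values)
for _ in range(100):
    grad_prior = -z            # gradient of log N(z; 0, I)
    grad_fid = -(z - z0)       # illustrative term keeping z near z0
    z = z + eta * (grad_prior + lam * grad_fid)

# The guided noise now sits in a higher-density region than z0.
assert log_density(z) > log_density(z0)
```

In the actual method the guidance must be far more careful than this toy gradient step, since the diffusion model's deterministic mapping, not a linear pull toward the origin, determines which high-density noise still decodes to an image consistent with the degraded input.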