🤖 AI Summary
Solving ill-posed inverse problems faces key bottlenecks: difficulty in prior modeling and reliance on fine-tuning generative models. To address these, this paper proposes a training-free method—Latent-Space Wasserstein Gradient Flow (DWGF)—that leverages pre-trained latent diffusion models (e.g., Stable Diffusion) as fixed, expressive priors. DWGF performs posterior sampling by minimizing the KL divergence between the posterior and the prior via Wasserstein gradient flow directly in the latent space, ensuring efficiency and stability without model adaptation. This work is the first to embed latent diffusion models into a gradient flow framework, eliminating the need for fine-tuning or auxiliary training while preserving strong prior expressivity and computational efficiency. Extensive experiments on multiple image inverse problem benchmarks demonstrate that DWGF significantly outperforms existing unsupervised and fine-tuned methods in both reconstruction quality (PSNR/SSIM) and convergence speed, validating its generalizability and practical utility.
📝 Abstract
Solving ill-posed inverse problems requires powerful and flexible priors. We propose leveraging pretrained latent diffusion models for this task through a new training-free approach, termed Diffusion-regularized Wasserstein Gradient Flow (DWGF). Specifically, we formulate the posterior sampling problem as a regularized Wasserstein gradient flow of the Kullback-Leibler divergence in the latent space. We demonstrate the performance of our method on standard benchmarks using StableDiffusion (Rombach et al., 2022) as the prior.