🤖 AI Summary
This work identifies that Diffusion Posterior Sampling (DPS) effectively performs Maximum A Posteriori (MAP) estimation rather than conditional score estimation, leading to substantial reconstruction bias and low sample diversity. To address this, we propose a retraining-free, end-to-end posterior maximization framework. First, we show empirically that DPS's conditional score estimate diverges from that of a well-trained conditional diffusion model and that its behavior aligns with MAP optimization. Second, we design a lightweight conditional score estimator, trained on only 100 images in about 8 GPU-hours, that enables more accurate posterior gradient computation. Third, we introduce multi-step gradient ascent coupled with explicit projection constraints to ensure stable and robust posterior maximization. Evaluated on 512×512 ImageNet, our method significantly improves reconstruction fidelity, reduces reliance on strong conditional diffusion models, and maintains computational efficiency. The implementation is publicly available.
📝 Abstract
Recent advances in diffusion models have been leveraged to solve inverse problems without additional training, and Diffusion Posterior Sampling (DPS) (Chung et al., 2022a) is among the most popular approaches. Previous analyses suggest that DPS accomplishes posterior sampling by approximating the conditional score. In this paper, however, we demonstrate that the conditional score approximation employed by DPS is not as effective as previously assumed; instead, it aligns more closely with the principle of maximizing a posterior (MAP). We substantiate this claim through an examination of DPS on 512×512 ImageNet images, which reveals that: 1) DPS's conditional score estimate diverges significantly from the score of a well-trained conditional diffusion model and is even inferior to the unconditional score; 2) the mean of DPS's conditional score estimate deviates significantly from zero, rendering it an invalid score estimate; 3) DPS generates high-quality samples with significantly lower diversity. In light of these findings, we posit that DPS resembles a MAP estimator more than a conditional score estimator, and we accordingly propose two enhancements to DPS: 1) we explicitly maximize the posterior through multi-step gradient ascent and projection; 2) we utilize a lightweight conditional score estimator trained with only 100 images and 8 GPU-hours. Extensive experimental results indicate that these improvements significantly enhance DPS's performance. The source code is available at https://github.com/tongdaxu/Rethinking-Diffusion-Posterior-Sampling-From-Conditional-Score-Estimator-to-Maximizing-a-Posterior.
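To make the "multi-step gradient ascent and projection" idea concrete, here is a minimal, hypothetical sketch on a linear inverse problem `y = A x`. It is not the paper's implementation: the function name, step size, and step count are illustrative, and the real method operates on diffusion-model iterates rather than a raw estimate.

```python
import numpy as np

def map_refine(x_init, y, A, step=0.1, n_steps=5):
    """Illustrative sketch (not the paper's code): multi-step gradient
    ascent on the log-likelihood log p(y|x) ∝ -||y - A x||^2, followed
    by an explicit projection onto the measurement-consistent set
    {x : A x = y} using the pseudo-inverse of A."""
    x = x_init.copy()
    for _ in range(n_steps):
        # gradient of -0.5 * ||y - A x||^2 w.r.t. x is A^T (y - A x)
        grad = A.T @ (y - A @ x)
        x = x + step * grad
    # explicit projection: x <- x + A^+ (y - A x), so that A x = y exactly
    x = x + np.linalg.pinv(A) @ (y - A @ x)
    return x

# toy usage: an inpainting-style measurement that observes 2 of 4 entries
rng = np.random.default_rng(0)
A = np.eye(4)[:2]
x_true = rng.normal(size=4)
y = A @ x_true
x_map = map_refine(rng.normal(size=4), y, A)
```

After the final projection step the refined estimate satisfies the measurement exactly (`A @ x_map == y` up to floating point), which is the role the explicit projection constraint plays: gradient ascent improves the posterior value, and projection enforces hard data consistency.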