🤖 AI Summary
To address the low sampling efficiency and poor task adaptability of diffusion models in posterior sampling for inverse problems, this paper proposes a generic piecewise-guided diffusion framework. Methodologically, it introduces a differentiable piecewise guidance function that dynamically modulates guidance strength across denoising stages according to noise levels, while explicitly modeling measurement noise—eliminating the need for task-specific retraining (e.g., denoising, inpainting, or super-resolution). Its key contribution is the first integration of piecewise guidance with posterior sampling, achieving robustness at high-noise stages and high-fidelity reconstruction at low-noise stages. Evaluated across multiple image restoration tasks, the method accelerates inference by 23–25% over the PGDM baseline, with only marginal degradation in PSNR (<0.15 dB) and SSIM (<0.005), thereby significantly improving the efficiency–quality trade-off.
📝 Abstract
Diffusion models are powerful tools for sampling from high-dimensional distributions by progressively transforming pure noise into structured data through a denoising process. When equipped with a guidance mechanism, these models can also generate samples from conditional distributions. In this paper, a novel diffusion-based framework is introduced for solving inverse problems using a piecewise guidance scheme. The guidance term is defined as a piecewise function of the diffusion timestep, facilitating the use of different approximations during the high-noise and low-noise phases. This design is shown to effectively balance computational efficiency with the accuracy of the guidance term. Unlike task-specific approaches that require retraining for each problem, the proposed method is problem-agnostic and readily adaptable to a variety of inverse problems. Additionally, it explicitly incorporates measurement noise into the reconstruction process. The effectiveness of the proposed framework is demonstrated through extensive experiments on image restoration tasks, specifically image inpainting and super-resolution. Using a class-conditional diffusion model for recovery, compared to the PGDM baseline, the proposed framework reduces inference time by 25% for inpainting with both random and center masks, and by 23% and 24% for 4× and 8× super-resolution, respectively, while incurring only a negligible loss in PSNR and SSIM.
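The core idea can be sketched as follows: during posterior sampling, the guidance term added to the unconditional score is weighted by a piecewise function of the timestep, so that a cheaper, more robust approximation is used in the high-noise phase and a more accurate one near the end of denoising. This is a minimal illustrative sketch, not the paper's actual algorithm; the function names (`piecewise_guidance_weight`, `guided_denoise_step`), the threshold `t_switch`, and the specific weights are all hypothetical.

```python
import numpy as np

def piecewise_guidance_weight(t, t_switch=500, w_high=0.3, w_low=1.0):
    """Hypothetical piecewise schedule: weaker guidance in the
    high-noise phase (t >= t_switch), full-strength guidance in the
    low-noise phase. The paper's actual schedule may differ."""
    return w_high if t >= t_switch else w_low

def guided_denoise_step(x, t, score_fn, likelihood_grad_fn, step_size=0.01):
    """One toy posterior-sampling update: unconditional score plus a
    piecewise-weighted likelihood gradient (the guidance term)."""
    w = piecewise_guidance_weight(t)
    return x + step_size * (score_fn(x, t) + w * likelihood_grad_fn(x, t))

# Toy usage with stand-in score / likelihood-gradient functions.
score = lambda x, t: -x                      # pull toward the prior mode
lik_grad = lambda x, t: (1.0 - x)            # pull toward the measurement
x = np.zeros(4)
for t in range(999, -1, -1):
    x = guided_denoise_step(x, t, score, lik_grad)
```

The only change relative to a fixed-weight guided sampler is the timestep-dependent weight, which is why the scheme needs no task-specific retraining.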