🤖 AI Summary
To address the fundamental mismatch between the latent distribution of low-quality (LQ) real-world images in realistic image super-resolution (Real-ISR) and the Gaussian noise prior assumed by DDPM and flow-matching models, this paper proposes the One Mid-timestep Guidance Real-ISR (OMGSR) framework. The authors first observe that noisy latents at mid-diffusion timesteps empirically approximate the LQ image latent distribution more closely; leveraging this insight, they inject the LQ latent at a single pre-computed mid-timestep and introduce a Latent Distribution Refinement loss to narrow the remaining gap. To suppress checkerboard artifacts, they propose an Overlap-Chunked LPIPS/GAN loss, and they accelerate high-resolution inference via a two-stage tiled VAE and diffusion pipeline. On 512×512 Real-ISR tasks, OMGSR-S/F achieves a state-of-the-art trade-off between quantitative metrics and perceptual quality. OMGSR-F (1k) significantly enhances fine-detail recovery and, for the first time in Real-ISR, robustly generates high-fidelity 2k-resolution images.
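The idea of a pre-computed mid-timestep can be illustrated with a toy search over the DDPM forward process: forward-diffuse high-quality (HQ) latents at each timestep and pick the timestep whose noisy latents best match the LQ latent statistics. This is only a sketch under assumed simplifications (a standard linear beta schedule, mean/variance moment matching, and hypothetical function names such as `find_mid_timestep`); it is not the paper's actual procedure.

```python
import numpy as np

def ddpm_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative product of (1 - beta_t) for a linear beta schedule."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def find_mid_timestep(z_hq, z_lq, T=1000, rng=None):
    """Toy mid-timestep search: forward-diffuse HQ latents z_hq at every
    timestep t and return the t whose noisy latents z_t best match the LQ
    latents z_lq by first/second moments (a crude distribution distance)."""
    if rng is None:
        rng = np.random.default_rng(0)
    alpha_bar = ddpm_alpha_bar(T)
    mu_lq, var_lq = z_lq.mean(), z_lq.var()
    best_t, best_d = 0, np.inf
    for t in range(T):
        eps = rng.standard_normal(z_hq.shape)
        # DDPM forward process: z_t = sqrt(abar_t) z_0 + sqrt(1 - abar_t) eps
        z_t = np.sqrt(alpha_bar[t]) * z_hq + np.sqrt(1.0 - alpha_bar[t]) * eps
        d = (z_t.mean() - mu_lq) ** 2 + (z_t.var() - var_lq) ** 2
        if d < best_d:
            best_t, best_d = t, d
    return best_t
```

With synthetic latents whose variance lies between the HQ latents' variance and 1, the selected timestep lands in the middle of the schedule rather than at either end, which is the intuition the framework builds on.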
📝 Abstract
Denoising Diffusion Probabilistic Models (DDPM) and Flow Matching (FM) generative models show promising potential for one-step Real-World Image Super-Resolution (Real-ISR). Recent one-step Real-ISR models typically inject the Low-Quality (LQ) image latent distribution at the initial timestep. However, a fundamental gap exists between the LQ image latent distribution and the Gaussian noisy latent distribution, limiting the effective use of generative priors. We observe that the noisy latent distribution at DDPM/FM mid-timesteps aligns more closely with the LQ image latent distribution. Based on this insight, we present One Mid-timestep Guidance Real-ISR (OMGSR), a universal framework applicable to DDPM/FM-based generative models. OMGSR injects the LQ image latent distribution at a pre-computed mid-timestep and incorporates the proposed Latent Distribution Refinement loss to narrow the latent distribution gap. We also design an Overlap-Chunked LPIPS/GAN loss to eliminate checkerboard artifacts in image generation. Within this framework, we instantiate OMGSR with two variants: OMGSR-S (SD-Turbo) and OMGSR-F (FLUX.1-dev). Experiments show that OMGSR-S/F achieves balanced/excellent performance across quantitative and qualitative metrics at 512 resolution; notably, OMGSR-F leads on all reference metrics. We further train a 1k-resolution OMGSR-F to match the default resolution of FLUX.1-dev, which yields excellent results, especially in fine image details. We also generate 2k-resolution images with the 1k-resolution OMGSR-F using our two-stage Tiled VAE & Diffusion.
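The "overlap-chunked" idea behind the LPIPS/GAN loss can be sketched as computing a patch-level loss over overlapping tiles, so that tile borders do not fall on a fixed grid that the loss could imprint as checkerboard artifacts. The chunk size, stride, and helper names (`overlapping_chunks`, `chunked_loss`) below are illustrative assumptions; a simple MSE stands in for the LPIPS/GAN critic.

```python
import numpy as np

def overlapping_chunks(img, chunk=64, stride=48):
    """Split a (C, H, W) array into overlapping (C, chunk, chunk) tiles.
    Assumes chunk <= H and chunk <= W. With stride < chunk, adjacent
    tiles overlap by (chunk - stride) pixels."""
    _, H, W = img.shape
    ys = list(range(0, H - chunk + 1, stride))
    xs = list(range(0, W - chunk + 1, stride))
    # ensure the bottom/right edges are covered by a final tile
    if ys[-1] != H - chunk:
        ys.append(H - chunk)
    if xs[-1] != W - chunk:
        xs.append(W - chunk)
    return [img[:, y:y + chunk, x:x + chunk] for y in ys for x in xs]

def chunked_loss(pred, target, loss_fn, chunk=64, stride=48):
    """Average a patch-level loss (stand-in for an LPIPS/GAN term) over
    corresponding overlapping chunks of prediction and target."""
    pc = overlapping_chunks(pred, chunk, stride)
    tc = overlapping_chunks(target, chunk, stride)
    return float(np.mean([loss_fn(p, t) for p, t in zip(pc, tc)]))
```

For a 128×128 image with 64-pixel chunks and a 48-pixel stride, this yields a 3×3 grid of nine overlapping tiles, and the loss reduces to zero when prediction and target coincide.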