Timestep-Aware Diffusion Model for Extreme Image Rescaling

📅 2024-08-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Extreme image scaling suffers from severe semantic structural distortion and poor texture fidelity. To address this, we propose a bidirectional reversible rescaling framework operating in the latent space of a pretrained autoencoder, pioneering the integration of text-to-image diffusion priors into extreme-scale (×16/×32) reconstruction. Our method introduces a pseudo-invertible latent-space mapping module and an adaptive timestep alignment strategy to dynamically allocate the generative capacity of diffusion models according to spatially non-uniform degradation. It synergistically combines VAE-based implicit modeling, fine-tuned Stable Diffusion, temporal-aware noise scheduling, and conditional guided reconstruction. Experiments demonstrate that our approach surpasses state-of-the-art methods by 2.1 dB in PSNR and 0.032 in SSIM, yielding visually more realistic and coherent results. Moreover, it enables end-to-end reversible encoding and decoding.

Technology Category

Application Category

📝 Abstract
Image rescaling aims to learn the optimal low-resolution (LR) image that can be accurately reconstructed to its original high-resolution (HR) counterpart, providing an efficient image processing and storage method for ultra-high definition media. However, extreme downscaling factors pose significant challenges to the upscaling process due to its highly ill-posed nature, causing existing image rescaling methods to struggle in generating semantically correct structures and perceptual friendly textures. In this work, we propose a novel framework called Timestep-Aware Diffusion Model (TADM) for extreme image rescaling, which performs rescaling operations in the latent space of a pre-trained autoencoder and effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model. Specifically, TADM adopts a pseudo-invertible module to establish the bidirectional mapping between the latent features of the HR image and the target-sized LR image. Then, the rescaled latent features are enhanced by a pre-trained diffusion model to generate more faithful details. Considering the spatially non-uniform degradation caused by the rescaling operation, we propose a novel time-step alignment strategy, which can adaptively allocate the generative capacity of the diffusion model based on the quality of the reconstructed latent features. Extensive experiments demonstrate the superiority of TADM over previous methods in both quantitative and qualitative evaluations.
Problem

Research questions and friction points this paper is trying to address.

Extreme image rescaling challenges due to ill-posed nature
Generating semantically correct structures and perceptual textures
Spatially non-uniform degradation in rescaling operations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Timestep-Aware Diffusion Model (TADM)
Pseudo-invertible module for bidirectional mapping
Time-step alignment strategy for adaptive enhancement
🔎 Similar Papers
No similar papers found.
Ce Wang
Ce Wang
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
W
Wanjie Sun
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China
Z
Zhenzhong Chen
School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China