Adversarial Diffusion Compression for Real-World Image Super-Resolution

📅 2024-11-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion-based methods for real-world image super-resolution (Real-ISR) suffer from slow inference and high computational overhead, hindering practical deployment. Method: This paper proposes AdcSR, the first Adversarial Diffusion Compression (ADC) framework tailored for Real-ISR, integrating knowledge distillation with GAN training. It prunes redundant structures from OSEDiff via modular removal and pruning, pretrains the pruned VAE decoder to restore its image-decoding ability, and employs a diffusion-GAN hybrid generative modeling strategy to recover the generation capability lost to compression. Contribution/Results: AdcSR enables efficient single-step reconstruction, achieving state-of-the-art (SOTA) perceptual quality on both synthetic and real-world benchmarks. It accelerates inference by up to 9.3×, reduces FLOPs by 78%, and cuts model parameters by 74%, significantly improving efficiency without compromising fidelity.

📝 Abstract
Real-world image super-resolution (Real-ISR) aims to reconstruct high-resolution images from low-resolution inputs degraded by complex, unknown processes. While many Stable Diffusion (SD)-based Real-ISR methods have achieved remarkable success, their slow, multi-step inference hinders practical deployment. Recent SD-based one-step networks like OSEDiff and S3Diff alleviate this issue but still incur high computational costs due to their reliance on large pretrained SD models. This paper proposes a novel Real-ISR method, AdcSR, by distilling the one-step diffusion network OSEDiff into a streamlined diffusion-GAN model under our Adversarial Diffusion Compression (ADC) framework. We meticulously examine the modules of OSEDiff, categorizing them into two types: (1) Removable (VAE encoder, prompt extractor, text encoder, etc.) and (2) Prunable (denoising UNet and VAE decoder). Since direct removal and pruning can degrade the model's generation capability, we pretrain our pruned VAE decoder to restore its ability to decode images and employ adversarial distillation to compensate for performance loss. This ADC-based diffusion-GAN hybrid design effectively reduces complexity by 73% in inference time, 78% in computation, and 74% in parameters, while preserving the model's generation capability. Experiments demonstrate that our proposed AdcSR achieves competitive recovery quality on both synthetic and real-world datasets, offering up to 9.3× speedup over previous one-step diffusion-based methods. Code and models are available at https://github.com/Guaishou74851/AdcSR.
Problem

Research questions and friction points this paper is trying to address.

Real-world image super-resolution from complex degraded inputs
High computational cost in one-step diffusion-based methods
Balancing performance and efficiency in diffusion-GAN hybrid models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills OSEDiff into diffusion-GAN model
Prunes and removes redundant modules efficiently
Uses adversarial distillation to maintain performance
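The adversarial-distillation idea above can be sketched as a combined training objective: the compressed student is pushed to match the teacher's output while also fooling a discriminator. The following is a minimal toy illustration with NumPy, not the authors' code; the function names, the L2 distillation term, the non-saturating GAN term, and the weight `lam` are all illustrative assumptions.

```python
import numpy as np

def distillation_loss(student_out, teacher_out):
    # L2 distance between student and teacher reconstructions (toy choice)
    return float(np.mean((student_out - teacher_out) ** 2))

def adversarial_loss(disc_score_on_student):
    # Non-saturating GAN generator loss: -log D(student output)
    return float(-np.log(disc_score_on_student + 1e-8))

def adc_objective(student_out, teacher_out, disc_score, lam=0.1):
    # Combined objective: match the teacher while fooling the discriminator;
    # lam balances fidelity to the teacher against adversarial realism
    return distillation_loss(student_out, teacher_out) + lam * adversarial_loss(disc_score)

# Toy usage with random arrays standing in for decoded images
rng = np.random.default_rng(0)
teacher = rng.standard_normal((8, 8))
student = teacher + 0.01 * rng.standard_normal((8, 8))
loss = adc_objective(student, teacher, disc_score=0.9)
```

In the paper's actual setup the student is the pruned OSEDiff network and the loss drives gradient updates through it; this sketch only shows how the two terms compose into one scalar objective.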
Bin Chen
Peking University, OPPO Research Institute
Gehui Li
Peking University
Rongyuan Wu
The Hong Kong Polytechnic University
Computational Photography, Generative Models
Xindong Zhang
OPPO
super resolution, mobile AI, model acceleration
Jie Chen
Peking University
Jian Zhang
Peking University
Lei Zhang
The Hong Kong Polytechnic University, OPPO Research Institute