DiT4SR: Taming Diffusion Transformer for Real-World Image Super-Resolution

📅 2025-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses real-world image super-resolution (Real-ISR) with DiT4SR, one of the first end-to-end adaptations of large-scale diffusion transformers (DiTs) to this task. To overcome key limitations of existing guidance schemes, namely DiTs' weak local modeling and coarse, static low-resolution (LR) conditioning, the method introduces two core designs: (1) LR embeddings are integrated directly into DiT's attention mechanism, enabling bidirectional information flow between the LR stream and the diffusion-generated latent, so the LR guidance evolves with the diffusion process and stays aligned with the generated latent at each step; (2) a cross-stream convolution layer injects the LR guidance into the generated latent, compensating for DiT's limited ability to capture local texture. Evaluated on multiple real-world degradation benchmarks, the method achieves state-of-the-art performance, substantially outperforming both UNet-based and ControlNet-guided diffusion approaches, and delivers a superior trade-off between reconstruction fidelity and texture consistency under the complex, unknown degradations of real-world scenarios.

📝 Abstract
Large-scale pre-trained diffusion models are becoming increasingly popular for solving the Real-World Image Super-Resolution (Real-ISR) problem because of their rich generative priors. Recently developed diffusion transformers (DiTs) have demonstrated superior performance over the traditional UNet-based architecture in image generation, which raises the question: Can we adopt the advanced DiT-based diffusion model for Real-ISR? To this end, we propose DiT4SR, one of the pioneering works to tame the large-scale DiT model for Real-ISR. Instead of directly injecting embeddings extracted from low-resolution (LR) images as ControlNet does, we integrate the LR embeddings into the original attention mechanism of DiT, allowing bidirectional information flow between the LR latent and the generated latent. The sufficient interaction of these two streams allows the LR stream to evolve with the diffusion process, producing progressively refined guidance that better aligns with the generated latent at each diffusion step. Additionally, the LR guidance is injected into the generated latent via a cross-stream convolution layer, compensating for DiT's limited ability to capture local information. These simple but effective designs endow the DiT model with superior performance in Real-ISR, as demonstrated by extensive experiments. Project Page: https://adam-duan.github.io/projects/dit4sr/.
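As a rough illustration of the bidirectional interaction the abstract describes, the LR tokens and the generated latent tokens can be concatenated and processed by one joint attention, so each stream attends to the other. This is a minimal numpy sketch of the idea only; the function names, shapes, and single-head formulation are assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_attention(gen_tokens, lr_tokens, Wq, Wk, Wv):
    """Single-head attention over the concatenated generated and LR
    streams, so information flows in both directions and both streams
    are updated (hypothetical sketch, not the paper's exact layer)."""
    x = np.concatenate([gen_tokens, lr_tokens], axis=0)   # (Ng+Nl, d)
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))        # joint attention map
    out = attn @ v
    n_gen = gen_tokens.shape[0]
    return out[:n_gen], out[n_gen:]                       # updated gen / LR streams

rng = np.random.default_rng(0)
d = 16
gen = rng.standard_normal((8, d))   # generated latent tokens
lr = rng.standard_normal((8, d))    # LR-derived tokens
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
new_gen, new_lr = joint_attention(gen, lr, Wq, Wk, Wv)
print(new_gen.shape, new_lr.shape)  # (8, 16) (8, 16)
```

Because the LR tokens are themselves outputs of the attention, the LR stream can evolve across diffusion steps rather than serving as a fixed condition, which is the key difference from one-way ControlNet-style injection.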
Problem

Research questions and friction points this paper is trying to address.

Adopting DiT-based diffusion model for Real-ISR
Enhancing LR image guidance in DiT attention
Improving local information capture in DiT for Real-ISR
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates LR embeddings into DiT attention
Uses cross-stream convolution for local information
Evolves LR guidance with diffusion process
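The cross-stream convolution listed above can be pictured as a small local filter applied to the LR feature map, whose output is added to the generated latent to restore local detail that token-level attention handles poorly. The sketch below is a naive depthwise 3x3 version under assumed (H, W, C) shapes; it illustrates the mechanism, not the authors' layer.

```python
import numpy as np

def conv3x3(feat, kernel):
    """Naive depthwise 3x3 convolution with zero padding over an
    (H, W, C) feature map; kernel has shape (3, 3, C)."""
    H, W, C = feat.shape
    padded = np.pad(feat, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(feat)
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + 3, j:j + 3, :]            # (3, 3, C)
            out[i, j] = np.einsum('hwc,hwc->c', patch, kernel)
    return out

def inject_lr_guidance(gen_latent, lr_latent, kernel):
    """Add locally aggregated LR features to the generated latent,
    a hypothetical stand-in for the cross-stream convolution layer."""
    return gen_latent + conv3x3(lr_latent, kernel)

rng = np.random.default_rng(1)
gen = rng.standard_normal((6, 6, 4))     # generated latent map
lr = rng.standard_normal((6, 6, 4))      # LR guidance map
k = rng.standard_normal((3, 3, 4)) * 0.1
fused = inject_lr_guidance(gen, lr, k)
print(fused.shape)  # (6, 6, 4)
```

The residual-style addition keeps the generated latent dominant while the convolution contributes neighborhood-level texture cues, compensating for the DiT's weak local modeling.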
Authors
Zheng-Peng Duan (Nankai University; Computer Vision)
Jiawei Zhang (SenseTime Research)
Xin Jin (VCIP, CS, Nankai University)
Ziheng Zhang (VCIP, CS, Nankai University)
Zheng Xiong (SenseTime Research)
Dongqing Zou (SenseTime Research, PBVR)
Jimmy Ren (SenseTime Research)
Chun-Le Guo (VCIP, CS, Nankai University)
Chongyi Li (Professor, Nankai University; Computer Vision, Computational Imaging, Computational Photography, Underwater Imaging)