DCTdiff: Intriguing Properties of Image Generative Modeling in the DCT Space

πŸ“… 2024-12-19
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 1
✨ Influential: 1
πŸ“„ PDF
πŸ€– AI Summary
To address the high computational cost and resolution limitations of image generation models operating in pixel or latent space, this paper proposes DCTdiff, the first end-to-end diffusion framework that models and denoises directly in the discrete cosine transform (DCT) frequency domain. Theoretically, the paper establishes that the diffusion process is equivalent to autoregressive modeling in the spectral domain. Methodologically, DCTdiff eliminates the need for VAE-based latent spaces, adapts the UViT/DiT architectures, and integrates multiple samplers to enable full generation entirely within the frequency domain. Experiments demonstrate that DCTdiff achieves better FID scores than latent diffusion (SD-VAE) at 512×512 resolution while using only one-quarter of the training cost. Furthermore, the analysis systematically uncovers key design principles and intrinsic properties of spectral-domain modeling.

πŸ“ Abstract
This paper explores image modeling from the frequency space and introduces DCTdiff, an end-to-end diffusion generative paradigm that efficiently models images in the discrete cosine transform (DCT) space. We investigate the design space of DCTdiff and reveal the key design factors. Experiments on different frameworks (UViT, DiT), generation tasks, and various diffusion samplers demonstrate that DCTdiff outperforms pixel-based diffusion models regarding generative quality and training efficiency. Remarkably, DCTdiff can seamlessly scale up to 512×512 resolution without using the latent diffusion paradigm and beats latent diffusion (using SD-VAE) with only 1/4 of the training cost. Finally, we illustrate several intriguing properties of DCT image modeling. For example, we provide a theoretical proof of why 'image diffusion can be seen as spectral autoregression', bridging the gap between diffusion and autoregressive models. The effectiveness of DCTdiff and the introduced properties suggest a promising direction for image modeling in the frequency space. The code is available at https://github.com/forever208/DCTdiff.
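The core idea rests on the DCT being an invertible, information-preserving transform: a diffusion model can operate on DCT coefficients instead of pixels without losing anything relative to pixel space. Below is a minimal illustrative sketch of that roundtrip using SciPy's type-II orthonormal DCT; it is not the paper's implementation, just the transform DCTdiff builds on.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative sketch (not the paper's code): map an "image" to the
# DCT frequency space and back, the domain in which DCTdiff operates.
rng = np.random.default_rng(0)
image = rng.random((8, 8))  # a tiny 8x8 grayscale block

# Forward 2D DCT (type-II, orthonormal): pixel space -> frequency space
coeffs = dctn(image, type=2, norm="ortho")

# Inverse 2D DCT: frequency space -> pixel space
reconstructed = idctn(coeffs, type=2, norm="ortho")

# The orthonormal DCT is exactly invertible, so modeling `coeffs`
# loses no information relative to modeling pixels directly.
print(np.allclose(image, reconstructed))  # True
```

With `norm="ortho"` the transform is also energy-preserving, and most of an image's energy concentrates in the low-frequency coefficients, which is what makes frequency-space modeling and the spectral-autoregression view natural.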
Problem

Research questions and friction points this paper is trying to address.

Explores image modeling in frequency space using DCT
Proposes DCTdiff for better quality and efficiency
Bridges diffusion and autoregressive models theoretically
Innovation

Methods, ideas, or system contributions that make the work stand out.

DCTdiff models images in frequency space
Outperforms pixel-based diffusion models
Scales to 512x512 resolution efficiently
πŸ”Ž Similar Papers
No similar papers found.
Mang Ning
PhD candidate, Utrecht University
deep learning, generative models
Mingxiao Li
KU Leuven, Belgium
Jianlin Su
Moonshot AI
Haozhe Jia
Shandong University, China
Lanmiao Liu
Utrecht University, the Netherlands
Martin Beneš
University of Innsbruck, Austria
A. A. Salah
Utrecht University, the Netherlands
Itir Onal Ertugrul
Assistant Professor at Utrecht University
Affective computing, Machine learning