DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing pixel-based diffusion models suffer from low training/inference efficiency and limited generation quality due to coupling high-frequency detail modeling with low-frequency semantic modeling within a single DiT architecture. This paper proposes DeCo, a frequency-decoupled pixel diffusion framework: it pioneers the separation of semantic modeling (handled by a DiT) from high-frequency detail reconstruction (performed by a lightweight pixel decoder), and introduces a frequency-aware flow matching loss to explicitly guide the DiT toward low-frequency semantic representation while enabling the decoder to efficiently recover structural details. This paradigm significantly improves end-to-end generation efficiency and fidelity. On ImageNet, DeCo achieves state-of-the-art performance—attaining FID scores of 1.62 (256×256) and 2.22 (512×512) with fewer parameters, and a GenEval score of 0.86—demonstrating superior overall generation quality and efficiency.

📝 Abstract
Pixel diffusion aims to generate images directly in pixel space in an end-to-end fashion. This approach avoids the limitations of the VAE in two-stage latent diffusion, offering higher model capacity. Existing pixel diffusion models suffer from slow training and inference, as they usually model both high-frequency signals and low-frequency semantics within a single diffusion transformer (DiT). To pursue a more efficient pixel diffusion paradigm, we propose the frequency-DeCoupled pixel diffusion framework. With the intuition of decoupling the generation of high- and low-frequency components, we leverage a lightweight pixel decoder to generate high-frequency details conditioned on semantic guidance from the DiT. This frees the DiT to specialize in modeling low-frequency semantics. In addition, we introduce a frequency-aware flow-matching loss that emphasizes visually salient frequencies while suppressing insignificant ones. Extensive experiments show that DeCo achieves superior performance among pixel diffusion models, attaining FID of 1.62 (256x256) and 2.22 (512x512) on ImageNet, closing the gap with latent diffusion methods. Furthermore, our pretrained text-to-image model achieves a leading overall score of 0.86 on GenEval in system-level comparison. Codes are publicly available at https://github.com/Zehong-Ma/DeCo.
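The abstract describes the frequency-aware flow-matching loss only by its intent: emphasize visually salient frequencies in the velocity-prediction residual and suppress insignificant ones. A minimal sketch of one way such a loss could be written, weighting the spectral error by a radial falloff; the `1 / (1 + alpha * radius)` weighting and the function names are my assumptions, not the paper's actual formulation:

```python
import numpy as np

def frequency_weights(h, w, alpha=1.0):
    """Hypothetical per-frequency weights: 1 at DC, decaying with radial frequency."""
    fy = np.fft.fftfreq(h)[:, None]  # vertical frequencies in cycles/sample
    fx = np.fft.fftfreq(w)[None, :]  # horizontal frequencies
    radius = np.sqrt(fx ** 2 + fy ** 2)
    return 1.0 / (1.0 + alpha * radius)

def freq_aware_fm_loss(v_pred, v_target):
    """Weighted spectral energy of the flow-matching velocity residual.

    v_pred, v_target: (H, W) arrays holding the predicted and target
    velocity for one channel; the residual is weighted per frequency bin.
    """
    err_spectrum = np.fft.fft2(v_pred - v_target)
    weights = frequency_weights(*v_pred.shape)
    return float(np.mean(weights * np.abs(err_spectrum) ** 2))
```

With `alpha > 0` this down-weights the highest frequencies, which matches the stated goal of steering the DiT toward low-frequency semantics; the actual weighting curve used by DeCo may differ.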
Problem

Research questions and friction points this paper is trying to address.

Existing pixel diffusion models suffer from slow training and inference
Current models mix high-frequency signals and low-frequency semantics within a single DiT
Pixel diffusion still trails latent diffusion in generation quality, a gap DeCo aims to close
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples low-frequency semantic modeling (DiT) from high-frequency detail generation (pixel decoder)
Employs a lightweight pixel decoder to reconstruct high-frequency details under semantic guidance from the DiT
Introduces a frequency-aware flow-matching loss that emphasizes visually salient frequencies
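The decoupling intuition above can be seen directly on any image: an ideal low-pass filter separates the smooth layout the DiT would model from the high-frequency residual the pixel decoder would reconstruct. A toy illustration of that frequency split, not the paper's architecture; the cutoff value is arbitrary:

```python
import numpy as np

def split_frequencies(img, cutoff=0.15):
    """Split an image into low- and high-frequency parts with an ideal
    low-pass filter in the Fourier domain (illustration only)."""
    spec = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    keep = np.sqrt(fx ** 2 + fy ** 2) <= cutoff  # pass band mask
    low = np.fft.ifft2(spec * keep).real   # smooth, semantic layout
    high = img - low                       # edges and fine texture
    return low, high
```

By construction `low + high` reconstructs the image exactly, mirroring the paper's framing in which the DiT and the lightweight decoder each own one half of the signal.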
Zehong Ma
State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University
Longhui Wei
Senior Researcher, Huawei (multimodal & visual pre-training, VLM, multimodal generation)
Shuai Wang
Nanjing University
Shiliang Zhang
Department of Computer Science, School of EECS, Peking University (multimedia information retrieval, multimedia systems, visual search)
Qi Tian
Huawei Inc.