🤖 AI Summary
Existing pixel-level diffusion models are prone to interference from high-dimensional, perceptually irrelevant signals, leading to inferior performance compared to latent diffusion approaches. This work proposes an end-to-end, pure pixel-space diffusion framework that eliminates the need for variational autoencoders (VAEs) and latent representations. By jointly optimizing LPIPS and DINO perceptual losses, the model is explicitly guided to learn both local textures and global semantic structures. Despite its architectural simplicity, the method achieves substantial gains in generation quality, attaining a state-of-the-art FID of 5.11 on ImageNet-256 in just 80 training epochs without classifier-free guidance. It also demonstrates strong performance in text-to-image synthesis, achieving a GenEval score of 0.79.
📝 Abstract
Pixel diffusion generates images directly in pixel space in an end-to-end manner, avoiding the artifacts and bottlenecks introduced by VAEs in two-stage latent diffusion. However, it is challenging to optimize high-dimensional pixel manifolds that contain many perceptually irrelevant signals, leaving existing pixel diffusion methods lagging behind latent diffusion models. We propose PixelGen, a simple pixel diffusion framework with perceptual supervision. Instead of modeling the full image manifold, PixelGen introduces two complementary perceptual losses to guide the diffusion model toward learning a more meaningful perceptual manifold. An LPIPS loss facilitates learning better local patterns, while a DINO-based perceptual loss strengthens global semantics. With perceptual supervision, PixelGen surpasses strong latent diffusion baselines. It achieves an FID of 5.11 on ImageNet-256 without classifier-free guidance using only 80 training epochs, and demonstrates favorable scaling performance on large-scale text-to-image generation with a GenEval score of 0.79. PixelGen requires no VAEs, no latent representations, and no auxiliary stages, providing a simpler yet more powerful generative paradigm. Code is publicly available at https://github.com/Zehong-Ma/PixelGen.
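The abstract's core idea, a pixel-space objective combining a pixel reconstruction term with a local (LPIPS-style) and a global (DINO-style) perceptual term, can be sketched as below. This is a minimal illustration, not the paper's implementation: the fixed random projections `W_local` and `W_global`, the patch size, the feature dimensions, and the loss weights `lam_lpips` and `lam_dino` are all hypothetical stand-ins for the pretrained LPIPS and DINO feature extractors the method actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in "feature extractors": fixed random projections
# playing the role of the pretrained LPIPS (patch-level) and DINO
# (image-level) networks used in the paper.
W_local = rng.standard_normal((48, 64))            # 4x4 RGB patch -> 64-d
W_global = rng.standard_normal((3 * 32 * 32, 128))  # whole image  -> 128-d

def local_features(img):
    """Per-patch features (LPIPS stand-in); img has shape (3, 32, 32)."""
    # Split the image into 64 non-overlapping 4x4 patches of 48 values each.
    patches = img.reshape(3, 8, 4, 8, 4).transpose(1, 3, 0, 2, 4).reshape(64, 48)
    return patches @ W_local

def global_features(img):
    """Whole-image features (DINO stand-in)."""
    return img.reshape(-1) @ W_global

def perceptual_diffusion_loss(x0_pred, x0_true, lam_lpips=1.0, lam_dino=0.5):
    """Pixel reconstruction loss plus two complementary perceptual terms.

    x0_pred is the model's predicted clean image, x0_true the target;
    the weights are illustrative, not taken from the paper.
    """
    pixel = np.mean((x0_pred - x0_true) ** 2)
    local = np.mean((local_features(x0_pred) - local_features(x0_true)) ** 2)
    glob = np.mean((global_features(x0_pred) - global_features(x0_true)) ** 2)
    return pixel + lam_lpips * local + lam_dino * glob

# Toy usage: a slightly perturbed "prediction" incurs a positive loss.
x_true = rng.standard_normal((3, 32, 32))
x_pred = x_true + 0.1 * rng.standard_normal((3, 32, 32))
print(perceptual_diffusion_loss(x_pred, x_true))
```

The two feature terms penalize errors at different scales: `local_features` only mixes values within each 4x4 patch, while `global_features` mixes the entire image, mirroring the local-texture versus global-semantics split the abstract attributes to LPIPS and DINO.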