Elucidating the representation of images within an unconditional diffusion model denoiser

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the internal image representation mechanisms of unconditional diffusion model denoisers (UNets) trained on ImageNet. We observe that intermediate layers activate sparse subsets of channels, and construct nonlinear, interpretable image representations via channel-wise spatial averaging. We establish, for the first time, that networks trained solely with a denoising objective inherently encode rich, accessible sparse semantic representations. To formalize this, we propose a stochastic latent-space reconstruction algorithm that quantitatively links Euclidean distances in the representation space to conditional density divergence and semantic similarity. Experiments demonstrate that this distance metric accurately reflects semantic proximity between images; clustering in this space simultaneously captures fine-grained textures, local objects, and global structure—substantially outperforming conventional class-aligned representations in semantic fidelity and granularity.

📝 Abstract
Generative diffusion models learn probability densities over diverse image datasets by estimating the score with a neural network trained to remove noise. Despite their remarkable success in generating high-quality images, the internal mechanisms of the underlying score networks are not well understood. Here, we examine a UNet trained for denoising on the ImageNet dataset, to better understand its internal representation and computation of the score. We show that the middle block of the UNet decomposes individual images into sparse subsets of active channels, and that the vector of spatial averages of these channels can provide a nonlinear representation of the underlying clean images. We develop a novel algorithm for stochastic reconstruction of images from this representation and demonstrate that it recovers a sample from a set of images defined by a target image representation. We then study the properties of the representation and demonstrate that Euclidean distances in the latent space correspond to distances between conditional densities induced by representations as well as semantic similarities in the image space. Applying a clustering algorithm in the representation space yields groups of images that share both fine details (e.g., specialized features, textured regions, small objects), as well as global structure, but are only partially aligned with object identities. Thus, we show for the first time that a network trained solely on denoising contains a rich and accessible sparse representation of images.
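The representation described in the abstract is simple to state: average each channel of the UNet's middle-block activations over its spatial dimensions, and compare images by Euclidean distance between the resulting vectors. A minimal sketch of that computation is below; the tensor shape, channel count, and function names are illustrative assumptions, not the paper's actual code, and the random arrays merely stand in for real UNet activations.

```python
import numpy as np

def channel_average_representation(features: np.ndarray) -> np.ndarray:
    """Collapse a (C, H, W) activation tensor into a C-dimensional
    vector by averaging each channel over its spatial dimensions."""
    return features.mean(axis=(1, 2))

def representation_distance(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Euclidean distance between the channel-averaged representations
    of two images' middle-block activations."""
    return float(np.linalg.norm(
        channel_average_representation(feat_a)
        - channel_average_representation(feat_b)
    ))

# Toy stand-ins for middle-block activations: 512 channels on an 8x8 grid.
rng = np.random.default_rng(0)
feats_a = rng.standard_normal((512, 8, 8))
feats_b = rng.standard_normal((512, 8, 8))
dist = representation_distance(feats_a, feats_b)
```

In the paper's setting, nearby vectors under this distance correspond to semantically similar images, which is what makes clustering in this space meaningful.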
Problem

Research questions and friction points this paper is trying to address.

Understanding internal mechanisms of diffusion model denoisers
Developing an algorithm for image reconstruction from sparse representations
Analyzing semantic similarities via latent space distances
Innovation

Methods, ideas, or system contributions that make the work stand out.

UNet decomposes images into sparse active channels
Novel algorithm for stochastic image reconstruction
Latent space distances reflect semantic similarities
Zahra Kadkhodaie
Flatiron Institute, Simons Foundation
Image priors · Generative models · Image Processing · Science of Deep Learning
Stéphane Mallat
Collège de France, Flatiron Institute, Simons Foundation
E. Simoncelli
New York University, Flatiron Institute, Simons Foundation