🤖 AI Summary
This study investigates the internal image-representation mechanisms of unconditional diffusion-model denoisers (UNets) trained on ImageNet. The authors observe that intermediate layers activate sparse subsets of channels, and from these they construct nonlinear, interpretable image representations via channel-wise spatial averaging. They establish, for the first time, that networks trained solely with a denoising objective inherently encode rich, accessible, sparse semantic representations. To formalize this, they propose a stochastic latent-space reconstruction algorithm that quantitatively links Euclidean distances in the representation space to divergences between conditional densities and to semantic similarity. Experiments demonstrate that this distance accurately reflects semantic proximity between images; clustering in this space simultaneously captures fine-grained textures, local objects, and global structure, substantially outperforming conventional class-aligned representations in semantic fidelity and granularity.
📝 Abstract
Generative diffusion models learn probability densities over diverse image datasets by estimating the score with a neural network trained to remove noise. Despite their remarkable success in generating high-quality images, the internal mechanisms of the underlying score networks are not well understood. Here, we examine a UNet trained for denoising on the ImageNet dataset to better understand its internal representation and computation of the score. We show that the middle block of the UNet decomposes individual images into sparse subsets of active channels, and that the vector of spatial averages of these channels can provide a nonlinear representation of the underlying clean images. We develop a novel algorithm for stochastic reconstruction of images from this representation and demonstrate that it recovers a sample from the set of images defined by a target representation. We then study the properties of the representation and demonstrate that Euclidean distances in the latent space correspond both to distances between the conditional densities induced by representations and to semantic similarities in the image space. Applying a clustering algorithm in the representation space yields groups of images that share both fine details (e.g., specialized features, textured regions, small objects) and global structure, but are only partially aligned with object identities. Thus, we show for the first time that a network trained solely on denoising contains a rich and accessible sparse representation of images.
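The representation described in the abstract can be sketched in a few lines: take the middle-block activation tensor of the UNet and average each channel over its spatial dimensions, yielding one vector per image whose Euclidean distances are then compared. This is a minimal illustrative sketch, assuming a (C, H, W) activation layout; the tensor shapes, random stand-in activations, and function name are hypothetical, not the authors' actual architecture or code.

```python
import numpy as np

def spatial_average_representation(feature_map: np.ndarray) -> np.ndarray:
    """Collapse a (C, H, W) activation tensor into a C-dimensional vector
    of per-channel spatial means, as described in the abstract."""
    return feature_map.mean(axis=(1, 2))

rng = np.random.default_rng(0)

# Stand-ins for middle-block activations of two images; in the paper these
# would come from a forward pass of the denoising UNet, not random noise.
feat_a = rng.random((512, 8, 8))
feat_b = rng.random((512, 8, 8))

rep_a = spatial_average_representation(feat_a)  # shape (512,)
rep_b = spatial_average_representation(feat_b)

# Per the abstract, Euclidean distance in this space tracks both divergence
# between the induced conditional densities and semantic similarity.
dist = np.linalg.norm(rep_a - rep_b)
print(rep_a.shape, float(dist))
```

Clustering images then amounts to running any standard algorithm (e.g., k-means) on these C-dimensional vectors rather than on raw pixels.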