Good Seed Makes a Good Crop: Discovering Secret Seeds in Text-to-Image Diffusion Models

📅 2024-05-23
🏛️ IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
📈 Citations: 10
Influential: 2
🤖 AI Summary
This work systematically investigates the impact of random seeds on the outputs of text-to-image diffusion models. To probe the underexplored role of seeds beyond stochastic initialization, we conduct large-scale sampling experiments, FID-based quality evaluation, seed-classification training, attribution-map visualization, and inpainting-based validation. The analysis reveals that seeds not only substantially modulate image fidelity (FID ranges from 21.60 for the best seed to 31.97 for the worst), color distribution, composition, and spatial layout, but also consistently encode semantic-level visual features. We identify high-fidelity "golden seeds" for the first time and show that seeds are highly distinguishable: a classifier predicts the generating seed with over 99.9% accuracy. These findings establish a reproducible seed-selection strategy for controllable image generation and uncover a structured, semantically meaningful role of randomness in diffusion modeling, challenging the conventional view of seeds as mere noise sources.

📝 Abstract
Recent advances in text-to-image (T2I) diffusion models have facilitated creative and photorealistic image synthesis. By varying the random seeds, we can generate many images for a fixed text prompt. Technically, the seed controls the initial noise and, in multi-step diffusion inference, the noise used for reparameterization at intermediate timesteps in the reverse diffusion process. However, the specific impact of the random seed on the generated images remains relatively unexplored. In this work, we conduct a large-scale scientific study into the impact of random seeds during diffusion inference. Remarkably, we reveal that the best ‘golden’ seed achieved an impressive FID of 21.60, compared to the worst ‘inferior’ seed's FID of 31.97. Additionally, a classifier can predict the seed number used to generate an image with over 99.9% accuracy in just a few epochs, establishing that seeds are highly distinguishable based on generated images. Encouraged by these findings, we examined the influence of seeds on interpretable visual dimensions. We find that certain seeds consistently produce grayscale images, prominent sky regions, or image borders. Seeds also affect image composition, including object location, size, and depth. Moreover, by leveraging these ‘golden’ seeds, we demonstrate improved image generation such as high-fidelity inference and diversified sampling. Our investigation extends to inpainting tasks, where we uncover some seeds that tend to insert unwanted text artifacts. Overall, our extensive analyses highlight the importance of selecting good seeds and offer practical utility for image generation.
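As the abstract notes, a single integer seed determines both the initial noise and the reparameterization noise drawn at each intermediate timestep of the reverse diffusion process, so one seed fixes the entire sampling trajectory. A minimal sketch of this mechanism (NumPy; the shapes, step count, and the placeholder denoising update are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def seeded_reverse_diffusion(seed: int, steps: int = 4, shape=(4, 8, 8)):
    """Schematic reverse-diffusion loop: one seed fixes the initial
    latent AND every intermediate noise draw, so the whole sampling
    trajectory is reproducible."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)           # initial noise x_T
    for t in range(steps, 0, -1):
        x_pred = 0.9 * x                     # placeholder for the denoiser's estimate
        noise = rng.standard_normal(shape)   # reparameterization noise at step t
        x = x_pred + 0.1 * noise
    return x

# Same seed -> identical trajectory; different seed -> different image.
a = seeded_reverse_diffusion(seed=42)
b = seeded_reverse_diffusion(seed=42)
c = seeded_reverse_diffusion(seed=7)
assert np.allclose(a, b) and not np.allclose(a, c)
```

In a real pipeline the same effect is typically achieved by passing a seeded random generator to the sampler, which is why a fixed prompt plus a fixed seed reproduces the same image.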
Problem

Research questions and friction points this paper is trying to address.

Exploring impact of random seeds on T2I diffusion outputs
Identifying golden seeds for improved image generation quality
Analyzing seed influence on visual attributes and composition
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large-scale empirical analysis of seed impact on diffusion outputs (FID, seed classification, attribution maps)
First identification of high-fidelity "golden" seeds for superior image quality
Seed selection as a practical lever for controlling visual attributes and composition
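The golden-seed contribution above reduces to a simple selection loop: sample many candidate seeds, score each one's generations with a quality metric (the paper uses FID), and reuse the best-scoring seeds at inference time. A minimal sketch with a stand-in `quality_score` (an assumption for illustration; the real score would be FID computed over a batch of generated images against a reference set):

```python
import numpy as np

def quality_score(seed: int) -> float:
    """Stand-in for an FID-style score (lower is better).
    In practice: generate a batch of images with this seed and
    compute FID against a reference image set."""
    rng = np.random.default_rng(seed)
    return float(rng.uniform(20.0, 32.0))  # mimic the paper's observed FID range

def find_golden_seeds(candidates, k: int = 3):
    """Rank candidate seeds by quality and return the k best ('golden') ones."""
    return sorted(candidates, key=quality_score)[:k]

golden = find_golden_seeds(range(100), k=3)
```

Because seeds are reusable across prompts, this scoring pass is a one-time cost: once identified, the same golden seeds can be fixed for all subsequent high-fidelity generations.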