🤖 AI Summary
This work addresses mode collapse in pre-trained text-to-image diffusion models, where repeated sampling from the same text prompt often yields visually similar outputs. To increase generation diversity without altering the model architecture or retraining, the authors propose an inference-time method that optimizes the initial noise. Combining a frequency-aware noise initialization strategy with a tailored optimization objective, the approach preserves image fidelity while substantially increasing output variability. Experiments show that the method outperforms existing techniques based on candidate selection or guidance mechanisms, recovering much of the diversity latent in off-the-shelf diffusion models.
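The summary mentions a frequency-aware noise initialization. As a rough illustration of what shaping the frequency profile of an initial noise sample can look like, the sketch below reweights the spectrum of white Gaussian noise by a radial power law and renormalizes. The function name `frequency_shaped_noise`, the exponent `alpha`, and the power-law profile are illustrative assumptions, not the paper's actual recipe.

```python
import numpy as np

def frequency_shaped_noise(shape, alpha, seed=None):
    """Sample 2-D Gaussian noise and reweight its spectrum as |f|^(-alpha).

    alpha = 0 leaves the noise white; alpha > 0 boosts low frequencies.
    The result is renormalized to zero mean and unit variance so its
    marginal statistics stay close to the standard Gaussian prior that
    diffusion samplers expect. (This power-law profile is an assumed
    stand-in for whatever frequency profile the paper actually uses.)
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    # Radial frequency magnitude for each FFT bin.
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    radius = np.sqrt(fy**2 + fx**2)
    radius[0, 0] = radius[0, 1]  # avoid division by zero at the DC bin
    weights = radius ** (-alpha)
    shaped = np.real(np.fft.ifft2(np.fft.fft2(noise) * weights))
    shaped -= shaped.mean()
    shaped /= shaped.std()
    return shaped
```

Calling this with `alpha=0.0` recovers (renormalized) white noise, while larger `alpha` concentrates more of the energy in low spatial frequencies, giving a family of initializations with different frequency profiles to optimize or search over.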
📝 Abstract
Contemporary text-to-image models exhibit a surprising degree of mode collapse, as can be seen when sampling several images given the same text prompt. While previous work has attempted to address this issue by steering the model using guidance mechanisms, or by generating a large pool of candidates and refining them, in this work we take a different direction and aim for diversity in generations via noise optimization. Specifically, we show that a simple noise optimization objective can mitigate mode collapse while preserving the fidelity of the base model. We also analyze the frequency characteristics of the noise and show that alternative noise initializations with different frequency profiles can improve both optimization and search. Our experiments demonstrate that noise optimization yields superior results in terms of generation quality and variety.
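To make the noise-optimization idea concrete, here is a toy gradient-ascent sketch. A fixed linear map stands in for the frozen generator (so the gradients can be written by hand), the objective pushes the resulting "generations" apart, and a soft shell penalty keeps each noise vector near the Gaussian typical set. The function `diversify_noises`, the linear stand-in `gen_matrix`, and the specific objective and regularizer are all assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def diversify_noises(z, gen_matrix, steps=100, lr=0.05, lam=1.0):
    """Toy sketch of optimizing initial noises for output diversity.

    z          : (k, d) batch of initial noise vectors.
    gen_matrix : (d, d) linear stand-in for the frozen generator,
                 so x_i = gen_matrix @ z_i plays the role of an image.
    We ascend the mean pairwise squared distance between the x_i while
    penalizing (||z_i||^2 - d)^2, a soft constraint keeping each noise
    on the Gaussian shell ||z||^2 ~ d (a common proxy for staying
    in-distribution for the diffusion prior).
    """
    z = z.copy()
    k, d = z.shape
    A = gen_matrix
    for _ in range(steps):
        x = z @ A.T  # toy "generations", shape (k, d)
        # Gradient of sum_{i<j} ||x_i - x_j||^2 w.r.t. z: each sample
        # is pushed away from the batch mean, then mapped back via A.
        grad_div = 2 * k * (x - x.mean(axis=0, keepdims=True)) @ A
        # Gradient of the shell penalty (||z_i||^2 - d)^2.
        norms2 = (z**2).sum(axis=1, keepdims=True)
        grad_reg = 4 * (norms2 - d) * z
        z += lr * (grad_div - lam * grad_reg) / d
    return z
```

Against a real diffusion model the "generator" is the full (non-linear) sampling chain and the gradients come from backpropagating through it or from some surrogate, but the structure of the loop — maximize a diversity term, regularize the noise toward its prior — is the same.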