🤖 AI Summary
Text-to-image diffusion models generate photorealistic images but struggle to produce genuinely novel visual concepts beyond what their prompts describe. Existing creativity-enhancement methods are limited either to predefined categories (e.g., interpolation of image features) or by prohibitive computational cost (e.g., embedding optimization or model fine-tuning). This paper proposes VLM-Guided Adaptive Negative-Prompting, a training-free, inference-time mechanism: a vision-language model (VLM) analyzes intermediate outputs of the diffusion process, detects conventional visual patterns, and adaptively adds them to the negative prompt, steering generation toward novel yet valid outputs. Unlike prior methods that primarily generate single objects, the approach extends creative guidance to complex scenarios, such as coherent sets of creative objects and elaborate compositional prompts, while preserving prompt fidelity. It achieves consistent gains in novelty, evaluated with statistical metrics in the CLIP embedding space, without compromising validity. Crucially, it incurs negligible computational overhead and integrates seamlessly into standard diffusion pipelines without architectural modification.
📝 Abstract
Creative generation is the synthesis of new, surprising, and valuable samples that reflect user intent yet cannot be envisioned in advance. This task aims to extend human imagination, enabling the discovery of visual concepts that exist in the unexplored spaces between familiar domains. While text-to-image diffusion models excel at rendering photorealistic scenes that faithfully match user prompts, they still struggle to generate genuinely novel content. Existing approaches to enhance generative creativity either rely on interpolation of image features, which restricts exploration to predefined categories, or require time-intensive procedures such as embedding optimization or model fine-tuning. We propose VLM-Guided Adaptive Negative-Prompting, a training-free, inference-time method that promotes creative image generation while preserving the validity of the generated object. Our approach utilizes a vision-language model (VLM) that analyzes intermediate outputs of the generation process and adaptively steers it away from conventional visual concepts, encouraging the emergence of novel and surprising outputs. We evaluate creativity through both novelty and validity, using statistical metrics in the CLIP embedding space. Through extensive experiments, we show consistent gains in creative novelty with negligible computational overhead. Moreover, unlike existing methods that primarily generate single objects, our approach extends to complex scenarios, such as generating coherent sets of creative objects and preserving creativity within elaborate compositional prompts. Our method integrates seamlessly into existing diffusion pipelines, offering a practical route to producing creative outputs that venture beyond the constraints of textual descriptions.
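The abstract describes the method only at a high level. As a minimal, purely illustrative sketch of the control loop it implies (every name here is an assumption, the VLM is mocked, and a toy scalar stands in for the latent; the paper's actual pipeline, VLM interface, and guidance computation are not reproduced), adaptive negative prompting might look like:

```python
def mock_vlm_detect(latent):
    """Stand-in for querying a VLM on a decoded intermediate image.

    Returns the name of a conventional visual pattern it recognizes,
    or None once the sample has drifted away from familiar concepts.
    The threshold here is arbitrary, for illustration only.
    """
    return "conventional visual pattern" if latent < 5 else None


def adaptive_negative_prompting(prompt, steps=10, check_every=3):
    """Toy denoising loop with VLM-guided adaptive negative prompting.

    Every `check_every` steps, the (mocked) VLM inspects the intermediate
    state; any conventional concept it detects is appended to the negative
    prompt, which then reduces guidance toward that concept on later steps.
    """
    negative_terms = []  # accumulated negative prompt
    latent = 0.0         # toy scalar standing in for the diffusion latent
    for t in range(steps):
        # Stand-in for guided denoising: each accumulated negative term
        # weakens the pull toward conventional content.
        guidance = 1.0 - 0.1 * len(negative_terms)
        latent += guidance
        if t % check_every == 0:
            concept = mock_vlm_detect(latent)
            if concept and concept not in negative_terms:
                negative_terms.append(concept)
    return latent, negative_terms
```

The key property the sketch illustrates is that the negative prompt is not fixed in advance: it grows adaptively at inference time from what the VLM observes, with no training or embedding optimization involved.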