🤖 AI Summary
This work addresses the challenge of generating highly creative, novel, and rare images without compromising visual quality. The authors propose a diffusion-based framework that formalizes "creativity" as the inverse probability of an image's occurrence in CLIP embedding space and actively steers the generative process toward low-probability regions of this space. To preserve visual fidelity while exploring such rare areas, a pullback mechanism is introduced to constrain the generation toward high-quality outputs. Experimental results demonstrate that the method efficiently produces images that are both visually compelling and uniquely distinctive across diverse text-to-image tasks, without relying on manual concept blending or explicit category exclusion.
📝 Abstract
Creative image generation has emerged as a compelling area of research, driven by the need to produce novel and high-quality images that expand the boundaries of imagination. In this work, we propose a novel framework for creative generation using diffusion models, where creativity is associated with the inverse probability of an image's existence in the CLIP embedding space. Unlike prior approaches that rely on manually blending concepts or explicitly excluding subcategories, our method estimates the probability distribution of generated images and drives it toward low-probability regions to produce rare, imaginative, and visually captivating outputs. We also introduce a pullback mechanism that achieves high creativity without sacrificing visual fidelity. Extensive experiments on text-to-image diffusion models demonstrate the effectiveness and efficiency of our creative generation framework, showcasing its ability to produce unique, novel, and thought-provoking images. This work provides a new perspective on creativity in generative models, offering a principled method to foster innovation in visual content synthesis.
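The core idea — treating creativity as low probability in an embedding space and steering toward rare regions while a pullback term preserves fidelity — can be illustrated with a toy sketch. Everything below is an assumption for illustration: the Gaussian density model, the random vectors standing in for CLIP embeddings, and the step sizes are not the paper's actual implementation, which operates inside the diffusion sampling process.

```python
import numpy as np

# Illustrative sketch only: random vectors stand in for CLIP image
# embeddings, and a single Gaussian stands in for the learned density.
rng = np.random.default_rng(0)

d = 8  # toy embedding dimension
typical = rng.normal(size=(500, d))  # embeddings of "typical" images

# Fit a simple Gaussian density over the embedding space.
mu = typical.mean(axis=0)
cov = np.cov(typical.T) + 1e-3 * np.eye(d)  # regularize for invertibility
cov_inv = np.linalg.inv(cov)

def log_prob(z):
    """Unnormalized log N(z; mu, cov) — higher means more 'typical'."""
    diff = z - mu
    return -0.5 * diff @ cov_inv @ diff

def grad_log_prob(z):
    """Gradient of log-density; points toward high-probability regions."""
    return -cov_inv @ (z - mu)

def creative_step(z, z_ref, lr=0.1, pull=0.05):
    """Descend the log-density (seek rare embeddings), then apply a
    pullback toward a high-quality reference embedding z_ref."""
    z = z - lr * grad_log_prob(z)   # move toward low-probability regions
    z = z + pull * (z_ref - z)      # pullback preserves fidelity
    return z

z_ref = typical[0].copy()  # start from a high-quality "typical" embedding
z = z_ref.copy()
for _ in range(50):
    z = creative_step(z, z_ref)

# The steered embedding is rarer than the starting point, but the
# pullback keeps it anchored near z_ref rather than drifting arbitrarily.
assert log_prob(z) < log_prob(z_ref)
```

The two-term update mirrors the abstract's trade-off: the density-descent term alone would drift into arbitrarily improbable (and likely degenerate) embeddings, while the pullback term bounds how far the sample can stray from a known high-quality point.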