🤖 AI Summary
Stable Diffusion–based models excel in text–image alignment and visual fidelity but exhibit limited creative generation capability—particularly when responding to abstract prompts such as “creative.” This paper proposes C3 (Creative Concept Catalyst), a training-free method that enhances originality during the denoising process via three mechanisms: gradient-guided feature-space steering, implicit layer-selective modulation, and prompt-agnostic semantic enhancement. Our key contributions are twofold: (i) the first zero-training approach to *directly* augment creativity in diffusion models; and (ii) a dynamic amplification factor selection criterion jointly optimized for originality and appropriateness. Evaluated across multiple Stable Diffusion variants, C3 achieves an average 37% improvement in creativity scores—measured by human and automated metrics—while preserving text–image alignment and visual fidelity.
📝 Abstract
Recent text-to-image generative models, particularly Stable Diffusion and its distilled variants, have achieved impressive fidelity and strong text-image alignment. However, their creative capability remains constrained, as including `creative' in prompts seldom yields the desired results. This paper introduces C3 (Creative Concept Catalyst), a training-free approach designed to enhance creativity in Stable Diffusion-based models. C3 selectively amplifies features during the denoising process to foster more creative outputs. We offer practical guidelines for choosing amplification factors based on two main aspects of creativity. C3 is the first study to enhance creativity in diffusion models without extensive computational costs. We demonstrate its effectiveness across various Stable Diffusion-based models.