🤖 AI Summary
Addressing the challenges of manually crafting high-quality prompts for text-to-image generation (labor-intensive prompt engineering, poor interpretability, and semantic incoherence in existing prompt inversion methods), this paper proposes a gradient-free, vision-guided hard prompt inversion framework. The method combines the controllable text generation of large language models (LLMs) with CLIP's cross-modal semantic evaluation: it iteratively refines discrete (hard) prompts via sampling-based autoregressive decoding, guided solely by CLIP similarity feedback, and requires no training, parameter updates, or differentiable optimization. Compared with both soft and hard prompt inversion baselines, the approach achieves significant improvements across multiple benchmarks in the semantic accuracy, grammatical coherence, and human readability of generated prompts, and it demonstrates strong generalization and fine-grained controllability. This work establishes an efficient, transparent, human-in-the-loop paradigm for controllable image generation.
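The iterative refinement loop described above can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: `sample_candidates` stands in for drawing top-k next tokens from an LLM, and `clip_score` stands in for CLIP image-text similarity (here reduced to overlap with a toy set of target concepts).

```python
def vgd_decode(sample_candidates, clip_score, steps, prompt=()):
    """At each autoregressive step, extend the prompt with each sampled
    candidate token and keep the extension that scores highest under the
    (CLIP-style) similarity function. No gradients are ever computed."""
    prompt = list(prompt)
    for _ in range(steps):
        candidates = [prompt + [tok] for tok in sample_candidates(prompt)]
        prompt = max(candidates, key=clip_score)
    return prompt

# --- toy stand-ins, for illustration only ---------------------------------
VOCAB = ["a", "photo", "of", "cat", "on", "sunset", "beach"]
TARGET = {"photo", "cat", "sunset"}  # pretend visual concepts in the image

def sample_candidates(prompt):
    # stands in for sampling candidate next tokens from an LLM
    return VOCAB

def clip_score(prompt):
    # stands in for CLIP similarity: count how many target concepts appear
    return len(set(prompt) & TARGET)

best = vgd_decode(sample_candidates, clip_score, steps=3)
print(best)
```

Because selection is purely score-based, the LLM keeps the prompt fluent while the scorer steers it toward the target image, which is the gradient-free division of labor the summary describes.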
📝 Abstract
Text-to-image generative models like DALL-E and Stable Diffusion have revolutionized visual content creation across various applications, including advertising, personalized media, and design prototyping. However, crafting effective textual prompts to guide these models remains challenging, often requiring extensive trial and error. Existing prompt inversion approaches, such as soft and hard prompt techniques, fall short due to limited interpretability and incoherent prompt generation. To address these issues, we propose Visually Guided Decoding (VGD), a gradient-free approach that leverages large language models (LLMs) and CLIP-based guidance to generate coherent and semantically aligned prompts. In essence, VGD utilizes the robust text generation capabilities of LLMs to produce human-readable prompts. Further, by employing CLIP scores to ensure alignment with user-specified visual concepts, VGD enhances the interpretability, generalization, and flexibility of prompt generation without the need for additional training. Our experiments demonstrate that VGD outperforms existing prompt inversion techniques in generating understandable and contextually relevant prompts, facilitating more intuitive and controllable interactions with text-to-image models.