🤖 AI Summary
To address the security and intellectual-property risks posed by high-value prompts in text-to-image (T2I) models, this paper introduces PROMPTMINER, the first fully black-box prompt stealing framework. The method requires neither gradient access to the target model nor labeled training data, and decomposes prompt inversion into two stages: subject reconstruction and style-modifier recovery. Subject reconstruction employs reinforcement learning to optimize semantic consistency with the generated images, while style-modifier recovery uses a fuzzing-based search to efficiently explore the combinatorial space of descriptive tokens. Evaluated across multiple T2I models and datasets, the framework achieves a CLIP similarity score of 0.958 and an SBERT textual alignment of 0.751. On real-world image-based attacks, it outperforms the best prior baseline by 7.5%, and it remains effective against defensive mechanisms, significantly advancing the practicality and robustness of black-box prompt inversion.
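The style-modifier recovery stage can be illustrated with a toy sketch. This is not the paper's implementation: the vocabulary, the mutation rule, and especially the `score` function are hypothetical stand-ins. In the real attack, scoring would mean querying the black-box T2I model with the candidate prompt and measuring CLIP similarity between the generated image and the target image; here a cheap token-overlap oracle plays that role so the search loop itself is runnable.

```python
import random

# Hypothetical style-modifier vocabulary (the real search space is the
# much larger set of descriptive tokens seen in curated prompts).
STYLE_VOCAB = ["4k", "oil painting", "cinematic lighting", "watercolor",
               "trending on artstation", "ultra detailed"]

def score(prompt, target_tokens):
    # Stand-in for the image-similarity oracle: fraction of the target
    # prompt's tokens that appear in the candidate prompt. The actual
    # framework would score via CLIP on generated images instead.
    return sum(tok in prompt for tok in target_tokens) / len(target_tokens)

def fuzz_style_modifiers(subject, target_tokens, iters=200, seed=0):
    """Greedy fuzzing over the combinatorial modifier space: randomly
    mutate the current modifier set (add or drop one token) and keep
    only mutations that strictly improve the score."""
    rng = random.Random(seed)
    best, best_score = set(), score(subject, target_tokens)
    for _ in range(iters):
        cand = set(best)
        tok = rng.choice(STYLE_VOCAB)
        if tok in cand:
            cand.discard(tok)          # mutation: drop a modifier
        else:
            cand.add(tok)              # mutation: add a modifier
        prompt = subject + ", " + ", ".join(sorted(cand))
        s = score(prompt, target_tokens)
        if s > best_score:
            best, best_score = cand, s
    return subject + ", " + ", ".join(sorted(best)), best_score

recovered, s = fuzz_style_modifiers(
    "a castle on a hill",
    target_tokens=["castle", "oil painting", "cinematic lighting"])
```

Because irrelevant modifiers never raise the score, the greedy accept rule keeps the recovered prompt minimal; the real framework faces a noisier oracle, which is why it frames this phase as fuzzing rather than exact search.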
📝 Abstract
Text-to-image (T2I) generative models such as Stable Diffusion and FLUX can synthesize realistic, high-quality images directly from textual prompts. The resulting image quality depends critically on well-crafted prompts that specify both subjects and stylistic modifiers, which have become valuable digital assets. However, the rising value and ubiquity of high-quality prompts expose them to security and intellectual-property risks. One key threat is the prompt stealing attack, i.e., recovering the textual prompt that generated a given image. Prompt stealing enables unauthorized extraction and reuse of carefully engineered prompts, yet it can also support beneficial applications such as data attribution, model provenance analysis, and watermark validation. Existing approaches often assume white-box gradient access, require large-scale labeled datasets for supervised training, or rely solely on captioning without explicit optimization, limiting their practicality and adaptability. To address these challenges, we propose PROMPTMINER, a black-box prompt stealing framework that decouples the task into two phases: (1) a reinforcement learning-based optimization phase that reconstructs the primary subject, and (2) a fuzzing-driven search phase that recovers stylistic modifiers. Experiments across multiple datasets and diffusion backbones show that PROMPTMINER achieves superior results, with CLIP similarity up to 0.958 and SBERT textual alignment up to 0.751, surpassing all baselines. Even on in-the-wild images from unknown generators, it outperforms the strongest baseline by 7.5% in CLIP similarity, indicating better generalization. Finally, PROMPTMINER maintains strong performance under defensive perturbations, demonstrating remarkable robustness. Code: https://github.com/aaFrostnova/PromptMiner
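Both reported metrics, CLIP similarity and SBERT textual alignment, are cosine similarities between embedding vectors (image embeddings for CLIP, sentence embeddings for SBERT). A minimal sketch of that computation, with toy vectors standing in for the embeddings a real CLIP or SBERT encoder would produce (the values are hypothetical):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors, the form of
    score used for both CLIP (image-image) and SBERT (text-text)
    alignment metrics."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 3-d vectors in place of real encoder outputs (hypothetical).
emb_target = [0.6, 0.8, 0.0]
emb_regen  = [0.8, 0.6, 0.0]
sim = cosine_similarity(emb_target, emb_regen)  # ≈ 0.96
```

In practice the embeddings are high-dimensional and produced by the respective pretrained encoders; a score near 1.0 means the regenerated image (or recovered prompt) is nearly indistinguishable from the target in embedding space.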