PROMPTMINER: Black-Box Prompt Stealing against Text-to-Image Generative Models via Reinforcement Learning and Fuzz Optimization

📅 2025-11-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the security and intellectual-property risks posed by high-value prompts in text-to-image (T2I) models, this paper introduces the first fully black-box prompt stealing framework. The method requires neither gradient access to the target model nor labeled data, and decomposes prompt inversion into two stages: subject reconstruction and style-modifier recovery. Subject reconstruction employs reinforcement learning to optimize semantic consistency with the generated images, while style-modifier recovery uses a fuzzing-based search to efficiently explore the combinatorial space of descriptive tokens. The framework demonstrates strong generalization and robustness against defensive mechanisms. Evaluated across multiple T2I models and datasets, it achieves a CLIP similarity of 0.958 and SBERT textual alignment of 0.751. In real-world image-based attacks, it outperforms the best prior baseline by 7.5% in CLIP similarity, significantly advancing the practicality and robustness of black-box prompt inversion.

📝 Abstract
Text-to-image (T2I) generative models such as Stable Diffusion and FLUX can synthesize realistic, high-quality images directly from textual prompts. The resulting image quality depends critically on well-crafted prompts that specify both subjects and stylistic modifiers, which have become valuable digital assets. However, the rising value and ubiquity of high-quality prompts expose them to security and intellectual-property risks. One key threat is the prompt stealing attack, i.e., the task of recovering the textual prompt that generated a given image. Prompt stealing enables unauthorized extraction and reuse of carefully engineered prompts, yet it can also support beneficial applications such as data attribution, model provenance analysis, and watermarking validation. Existing approaches often assume white-box gradient access, require large-scale labeled datasets for supervised training, or rely solely on captioning without explicit optimization, limiting their practicality and adaptability. To address these challenges, we propose PROMPTMINER, a black-box prompt stealing framework that decouples the task into two phases: (1) a reinforcement learning-based optimization phase to reconstruct the primary subject, and (2) a fuzzing-driven search phase to recover stylistic modifiers. Experiments across multiple datasets and diffusion backbones demonstrate that PROMPTMINER achieves superior results, with CLIP similarity up to 0.958 and textual alignment with SBERT up to 0.751, surpassing all baselines. Even when applied to in-the-wild images with unknown generators, it outperforms the strongest baseline by 7.5 percent in CLIP similarity, demonstrating better generalization. Finally, PROMPTMINER maintains strong performance under defensive perturbations, highlighting remarkable robustness. Code: https://github.com/aaFrostnova/PromptMiner
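The RL-based subject-reconstruction phase (phase 1 above) can be illustrated with a toy REINFORCE loop. Everything below is a hypothetical simplification: the candidate pool and the word-overlap `reward` are stand-ins for the paper's actual setup, which generates free-form subject text and scores semantic consistency against the target image with CLIP.

```python
import math
import random

# Hypothetical candidate pool; a real attack samples free-form text from a policy model.
CANDIDATES = ["a castle on a hill", "a dragon", "a forest cabin"]

def reward(subject, target="a castle on a hill"):
    """Stand-in for CLIP-based semantic consistency: word overlap with the
    (hidden) target prompt. The paper instead compares generated images."""
    s, t = set(subject.split()), set(target.split())
    return len(s & t) / len(t)

def reinforce(steps=300, lr=0.5, seed=1):
    """Toy REINFORCE: a softmax policy over the candidate pool, updated so
    that higher-reward subjects become more likely."""
    rng = random.Random(seed)
    logits = [0.0] * len(CANDIDATES)
    baseline = 0.0  # moving-average baseline to reduce gradient variance
    for _ in range(steps):
        exps = [math.exp(x) for x in logits]
        probs = [e / sum(exps) for e in exps]
        i = rng.choices(range(len(CANDIDATES)), weights=probs)[0]
        r = reward(CANDIDATES[i])
        adv = r - baseline
        baseline = 0.9 * baseline + 0.1 * r
        for j, p in enumerate(probs):  # grad of log-softmax: 1[j==i] - p_j
            logits[j] += lr * adv * ((1.0 if j == i else 0.0) - p)
    return CANDIDATES[max(range(len(CANDIDATES)), key=logits.__getitem__)]
```

In this toy setting only the correct candidate attains the maximum reward, so the policy concentrates on it; the black-box property holds because the update uses only sampled rewards, never gradients of the target model.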
Problem

Research questions and friction points this paper is trying to address.

Develops a black-box method to steal prompts from text-to-image models
Recovers both subject and style modifiers via RL and fuzzing optimization
Enhances practicality and robustness without white-box access or labeled data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box framework decouples subject reconstruction and modifier recovery
Uses reinforcement learning to optimize primary subject reconstruction
Employs fuzzing-driven search to recover stylistic modifiers
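The fuzzing-driven modifier search in the last bullet can be sketched as a greedy mutation loop. The modifier pool and the phrase-overlap `clip_similarity` below are illustrative stand-ins, not the paper's implementation: a real attack would render each candidate prompt with the T2I model and compare the result to the target image via CLIP.

```python
import random

# Illustrative modifier vocabulary; the real descriptive-token space is far larger.
MODIFIERS = ["oil painting", "4k", "cinematic lighting",
             "watercolor", "highly detailed", "trending on artstation"]

def clip_similarity(prompt, target):
    """Stand-in scorer: overlap of comma-separated phrases. A real attack
    scores rendered images, not prompt text."""
    p = {s.strip() for s in prompt.split(",")}
    t = {s.strip() for s in target.split(",")}
    return len(p & t) / len(t)

def fuzz_modifiers(subject, target, rounds=100, seed=0):
    """Greedy fuzz loop: randomly add or drop a modifier and keep the
    mutation only when the similarity score improves."""
    rng = random.Random(seed)
    current = []
    best = clip_similarity(subject, target)
    for _ in range(rounds):
        cand = list(current)
        if cand and rng.random() < 0.3:
            cand.pop(rng.randrange(len(cand)))   # mutation: drop a modifier
        else:
            m = rng.choice(MODIFIERS)            # mutation: add a modifier
            if m not in cand:
                cand.append(m)
        score = clip_similarity(", ".join([subject] + cand), target)
        if score > best:
            current, best = cand, score
    return ", ".join([subject] + current), best
```

Accept-if-better mutation is the core fuzzing idea: the combinatorial modifier space is explored without any gradient signal, which is what keeps the attack fully black-box.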
Authors
Mingzhe Li (University of Massachusetts, Amherst)
Renhao Zhang (University of Massachusetts, Amherst)
Zhiyang Wen (University of Massachusetts, Amherst)
Siqi Pan (Dolby Laboratories)
Bruno Castro da Silva (University of Massachusetts): artificial intelligence, machine learning, reinforcement learning
Juan Zhai (University of Massachusetts, Amherst): software text analytics, software reliability, deep learning
Shiqing Ma (University of Massachusetts, Amherst): security, AI, SE