Stealix: Model Stealing via Prompt Evolution

📅 2025-06-06
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Model stealing poses a realistic threat, yet existing data synthesis methods based on pre-trained diffusion models rely on manually engineered prompts, limiting their accessibility to low-skill attackers. This paper proposes the first fully prompt-free black-box model stealing approach, operating without knowledge of target class names or prompt engineering expertise. Leveraging two open-source pre-trained models (e.g., Stable Diffusion), the method infers the victim model's data distribution and employs a query-driven genetic algorithm to automatically evolve precise, diverse prompts. Key contributions: (1) a more realistic threat model for low-skill adversaries; (2) an automated evolutionary mechanism over prompt space; and (3) higher accuracy and diversity of synthesized images under identical query budgets, outperforming baselines that require class names or handcrafted prompts, which suggests the theft risk posed by pre-trained generative models has been underestimated.

📝 Abstract
Model stealing poses a significant security risk in machine learning by enabling attackers to replicate a black-box model without access to its training data, thus jeopardizing intellectual property and exposing sensitive information. Recent methods that use pre-trained diffusion models for data synthesis improve efficiency and performance but rely heavily on manually crafted prompts, limiting automation and scalability, especially for attackers with little expertise. To assess the risks posed by open-source pre-trained models, we propose a more realistic threat model that eliminates the need for prompt design skills or knowledge of class names. In this context, we introduce Stealix, the first approach to perform model stealing without predefined prompts. Stealix uses two open-source pre-trained models to infer the victim model's data distribution, and iteratively refines prompts through a genetic algorithm, progressively improving the precision and diversity of synthetic images. Our experimental results demonstrate that Stealix significantly outperforms other methods, even those with access to class names or fine-grained prompts, while operating under the same query budget. These findings highlight the scalability of our approach and suggest that the risks posed by pre-trained generative models in model stealing may be greater than previously recognized.
Problem

Research questions and friction points this paper is trying to address.

Model stealing threatens intellectual property by replicating a black-box model without access to its training data
Existing methods depend on manually crafted prompts, limiting automation and scalability
Stealix evolves prompts automatically, enabling model stealing without predefined prompts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a query-driven genetic algorithm for iterative prompt refinement
Leverages two open-source pre-trained models to infer the victim model's data distribution
Eliminates the need for predefined prompts or class names
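To make the evolutionary mechanism concrete, here is a minimal, hypothetical sketch of a query-driven genetic algorithm over a token-based prompt space. All names (`TOKENS`, `fitness`, `evolve`) are illustrative assumptions, not the paper's implementation: a real attack would score each prompt by generating images with a diffusion model and querying the victim classifier, whereas the placeholder `fitness` below just scores tokens numerically so the loop runs standalone.

```python
import random

# Toy vocabulary; in the actual attack, candidate tokens would be inferred
# from the victim model's data distribution (hypothetical stand-in).
TOKENS = ["photo", "close-up", "outdoor", "bird", "wing", "feather", "sky"]

def random_prompt(length=4):
    """Sample an initial prompt as a list of tokens."""
    return [random.choice(TOKENS) for _ in range(length)]

def fitness(prompt):
    # Placeholder score. In the paper's setting this would synthesize images
    # from " ".join(prompt) and measure agreement with the victim model,
    # consuming part of the query budget.
    return sum(TOKENS.index(t) for t in prompt)

def mutate(prompt, rate=0.25):
    """Randomly swap tokens to keep the population diverse."""
    return [random.choice(TOKENS) if random.random() < rate else t for t in prompt]

def crossover(a, b):
    """Single-point crossover between two parent prompts."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=30, elite=4):
    """Evolve prompts: keep the fittest, breed and mutate the rest."""
    population = [random_prompt() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - elite)
        ]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(" ".join(best))
```

Swapping the placeholder `fitness` for a victim-model query is the key design point: the attacker never needs to write a prompt by hand, only to rank generated candidates under a fixed query budget.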