🤖 AI Summary
This work addresses the vulnerability of current text-to-image (T2I) models to adversarial prompts that evade safety mechanisms at the cross-lingual and fine-grained semantic levels. To this end, the authors propose MacPrompt, a black-box, cross-lingual adversarial attack that introduces a novel macaronic prompting strategy based on character-level cross-lingual recombination of harmful terms. This approach constructs prompts that maintain high semantic similarity to the original harmful inputs (up to 0.96) while effectively bypassing mainstream safety filters and concept erasure defenses. Notably, MacPrompt requires no internal model knowledge: it bypasses major safety filters at rates of up to 100% and achieves attack success rates of 92% and 90% on sexually explicit and violent content, respectively, significantly outperforming conventional synonym substitution and prompt obfuscation techniques.
📝 Abstract
Text-to-image (T2I) models have raised increasing safety concerns due to their capacity to generate NSFW content and other prohibited material. To mitigate these risks, safety filters and concept removal techniques have been introduced to block inappropriate prompts or erase sensitive concepts from the models. However, existing defenses remain ill-prepared to handle diverse adversarial prompts. In this work, we introduce MacPrompt, a novel black-box, cross-lingual attack that reveals previously overlooked vulnerabilities in T2I safety mechanisms. Unlike existing attacks that rely on synonym substitution or prompt obfuscation, MacPrompt constructs macaronic adversarial prompts by performing cross-lingual, character-level recombination of harmful terms, enabling fine-grained control over both semantics and appearance. By leveraging this design, MacPrompt crafts prompts with high semantic similarity to the original harmful inputs (up to 0.96) while bypassing major safety filters (up to 100%). More critically, it achieves attack success rates as high as 92% for sex-related content and 90% for violence, effectively breaking even state-of-the-art concept removal defenses. These results underscore the pressing need to reassess the robustness of existing T2I safety mechanisms against linguistically diverse and fine-grained adversarial strategies.