MacPrompt: Macaronic-guided Jailbreak against Text-to-Image Models

📅 2026-01-12
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of current text-to-image (T2I) models to adversarial prompts that evade safety mechanisms at cross-lingual and fine-grained semantic levels. To this end, the authors propose MacPrompt, a black-box, cross-lingual adversarial attack method that introduces a novel macaronic prompting strategy based on character-level cross-lingual recombination. This approach constructs prompts that maintain high semantic similarity to the original harmful inputs (up to 0.96) while effectively bypassing mainstream safety filters and concept erasure defenses. Notably, MacPrompt requires no internal model knowledge and achieves attack success rates of 92% and 90% on sexually explicit and violent content, respectively, while bypassing safety filters with up to 100% success, significantly outperforming conventional synonym-substitution and prompt-obfuscation techniques.

📝 Abstract
Text-to-image (T2I) models have raised increasing safety concerns due to their capacity to generate NSFW content and other banned objects. To mitigate these risks, safety filters and concept removal techniques have been introduced to block inappropriate prompts or erase sensitive concepts from the models. However, existing defense methods are not well prepared to handle diverse adversarial prompts. In this work, we introduce MacPrompt, a novel black-box and cross-lingual attack that reveals previously overlooked vulnerabilities in T2I safety mechanisms. Unlike existing attacks that rely on synonym substitution or prompt obfuscation, MacPrompt constructs macaronic adversarial prompts by performing cross-lingual character-level recombination of harmful terms, enabling fine-grained control over both semantics and appearance. By leveraging this design, MacPrompt crafts prompts with high semantic similarity to the original harmful inputs (up to 0.96) while bypassing major safety filters (up to 100%). More critically, it achieves attack success rates as high as 92% for sex-related content and 90% for violence, effectively breaking even state-of-the-art concept removal defenses. These results underscore the pressing need to reassess the robustness of existing T2I safety mechanisms against linguistically diverse and fine-grained adversarial strategies.
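To make the "character-level cross-lingual recombination" idea concrete, the sketch below mixes scripts inside a single benign word using a small hand-made Latin-to-Cyrillic lookalike table. This is an assumption for illustration only, not the paper's MacPrompt algorithm: the `LOOKALIKES` table, the `recombine` helper, and the substitution positions are all invented here. It only shows why an exact-match keyword filter can miss a string that a human reads unchanged.

```python
# Illustrative sketch only: NOT the paper's MacPrompt method.
# Demonstrates the general idea of character-level cross-lingual
# recombination with a hypothetical Latin -> Cyrillic lookalike table.

LOOKALIKES = {
    "a": "а",  # U+0430 CYRILLIC SMALL LETTER A
    "e": "е",  # U+0435 CYRILLIC SMALL LETTER IE
    "o": "о",  # U+043E CYRILLIC SMALL LETTER O
    "c": "с",  # U+0441 CYRILLIC SMALL LETTER ES
    "p": "р",  # U+0440 CYRILLIC SMALL LETTER ER
}

def recombine(word: str, positions: set[int]) -> str:
    """Swap characters at the given positions for cross-script lookalikes."""
    return "".join(
        LOOKALIKES[ch] if i in positions and ch in LOOKALIKES else ch
        for i, ch in enumerate(word)
    )

# Benign example: visually identical to a human reader, but no longer
# equal to the original string, so an exact-match filter misses it.
mixed = recombine("peace", {0, 2})
print(mixed == "peace")  # False: the word now mixes two scripts
```

The paper's actual attack additionally optimizes which characters to recombine so that the embedding-level semantic similarity to the harmful prompt stays high (up to 0.96); that search step is omitted here.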
Problem

Research questions and friction points this paper is trying to address.

text-to-image models
safety mechanisms
adversarial prompts
cross-lingual attack
content moderation
Innovation

Methods, ideas, or system contributions that make the work stand out.

macaronic prompting
cross-lingual attack
text-to-image safety
adversarial prompt
concept removal bypass
Xi Ye
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, China
Yiwen Liu
Technical University of Munich
Computer Vision, Robotics Vision, Multimodal Learning
Lina Wang
Professor, Wuhan University
Computer Security
Run Wang
Integrated Systems Laboratory (IIS), ETHz
Hardware/Software Co-design, TinyML
Geying Yang
School of Cyber Science and Engineering, Tianjin University, China
Yufei Hou
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, China
Jiayi Yu
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, China