🤖 AI Summary
This study reveals severe prompt-level jailbreaking risks for large language models (LLMs) and text-to-image (T2I) systems in real-world settings: non-expert users can efficiently circumvent mainstream API safety mechanisms using low-barrier prompting techniques such as multi-turn narrative escalation, lexical obfuscation, and semantic implication. Method: The authors propose the first unified taxonomy of prompt-level jailbreaking strategies applicable to both text-generation and T2I models, grounded in a systematic empirical framework incorporating role-playing, implicit logical-chain construction, and dynamic lexical substitution. Experiments are conducted directly on production API endpoints. Contribution/Results: The attacks demonstrate high reproducibility and success rates across diverse models and platforms. The results show that current input-filtering and output-moderation mechanisms are largely ineffective against lightweight, everyday adversarial prompts, exposing critical weaknesses and systemic fragility in existing safety architectures.
📝 Abstract
Despite significant advancements in alignment and content moderation, large language models (LLMs) and text-to-image (T2I) systems remain vulnerable to prompt-based attacks known as jailbreaks. Unlike traditional adversarial examples requiring expert knowledge, many of today's jailbreaks are low-effort and high-impact, crafted by everyday users with nothing more than cleverly worded prompts. This paper presents a systems-style investigation into how non-experts reliably circumvent safety mechanisms through techniques such as multi-turn narrative escalation, lexical camouflage, implication chaining, fictional impersonation, and subtle semantic edits. We propose a unified taxonomy of prompt-level jailbreak strategies spanning both text-output and T2I models, grounded in empirical case studies across popular APIs. Our analysis reveals that every stage of the moderation pipeline, from input filtering to output validation, can be bypassed with accessible strategies. We conclude by highlighting the urgent need for context-aware defenses that reflect the ease with which these jailbreaks can be reproduced in real-world settings.