🤖 AI Summary
This work identifies a fundamental vulnerability in aligned large language models (LLMs): their ethical decision boundaries are fragile under out-of-distribution (OOD) conditions and highly susceptible to jailbreaking attacks. To address this, we propose ObscurePrompt—a general black-box jailbreaking framework that requires no white-box access or fixed templates. It iteratively applies text-based fuzzing perturbations to LLM-generated prompts, explicitly modeling and crossing ethical decision boundaries through multi-round optimization and adversarial robustness enhancement. Experiments demonstrate that ObscurePrompt achieves significantly higher attack success rates than state-of-the-art methods across mainstream aligned models, while remaining robust against prominent defenses—including RLHF combined with rejection sampling and Constitutional AI. Crucially, this is the first systematic study to expose boundary instability in aligned models under OOD settings, establishing a new paradigm for evaluating and improving robust alignment.
📝 Abstract
Recently, Large Language Models (LLMs) have garnered significant attention for their exceptional natural language processing capabilities. However, concerns about their trustworthiness remain unresolved, particularly in addressing ``jailbreaking'' attacks on aligned LLMs. Previous research predominantly relies on scenarios involving white-box LLMs or specific, fixed prompt templates, which are often impractical and lack broad applicability. In this paper, we introduce a straightforward and novel method called ObscurePrompt for jailbreaking LLMs, inspired by the observed fragile alignments in Out-of-Distribution (OOD) data. Specifically, we first formulate the decision boundary in the jailbreaking process and then explore how obscure text affects LLM's ethical decision boundary. ObscurePrompt starts with constructing a base prompt that integrates well-known jailbreaking techniques. Powerful LLMs are then utilized to obscure the original prompt through iterative transformations, aiming to bolster the attack's robustness. Comprehensive experiments show that our approach substantially improves upon previous methods in terms of attack effectiveness, maintaining efficacy against two prevalent defense mechanisms.