🤖 AI Summary
Existing LLM jailbreaking attacks suffer from insufficient mutation diversity, shallow fitness evaluation, and vulnerability to keyword-based detection. To address these issues, this paper proposes ForgeDAN, an evolutionary adversarial attack framework. Methodologically, it integrates character-, word-, and sentence-level perturbations to generate semantically coherent jailbreak prompts; employs a pre-trained semantic-similarity model to quantify prompt alignment; and introduces an LLM-driven dual-dimension classifier that jointly assesses input compliance and output harmfulness. The key contribution is the first realization of synergistic optimization between multi-granularity perturbations and interpretable semantic evaluation, significantly enhancing both attack stealthiness and success rate. Experiments on mainstream aligned models (including Llama-3-Instruct and Qwen2.5-7B-Instruct) demonstrate an average jailbreak success rate of 86.3%, outperforming state-of-the-art methods by 12.7%. Moreover, prompt naturalness (BLEU-4 ≥ 0.72) and false-positive rate (<5.2%) are substantially improved.
📝 Abstract
The rapid adoption of large language models (LLMs) has brought both transformative applications and new security risks, including jailbreak attacks that bypass alignment safeguards to elicit harmful outputs. Existing automated jailbreak generation approaches, e.g., AutoDAN, suffer from limited mutation diversity, shallow fitness evaluation, and fragile keyword-based detection. To address these limitations, we propose ForgeDAN, a novel evolutionary framework for generating semantically coherent and highly effective adversarial prompts against aligned LLMs. First, ForgeDAN introduces multi-strategy textual perturbations across *character-, word-, and sentence-level* operations to enhance attack diversity. Second, we employ an interpretable semantic fitness evaluation based on a text-similarity model to guide the evolutionary process toward semantically relevant and harmful outputs. Finally, ForgeDAN integrates a dual-dimensional jailbreak judgment, leveraging an LLM-based classifier to jointly assess model compliance and output harmfulness, thereby reducing false positives and improving detection effectiveness. Our evaluation demonstrates that ForgeDAN achieves high jailbreak success rates while maintaining naturalness and stealth, outperforming existing state-of-the-art solutions.
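The abstract describes an evolutionary loop combining multi-granularity mutation, a semantic fitness score, and a success judge. The sketch below is a minimal, hypothetical illustration of that loop structure based only on the abstract: the three mutation operators are toy placeholders, and `fitness` and `judge` are caller-supplied stand-ins for the paper's text-similarity model and dual-dimension LLM classifier.

```python
import random

# Hypothetical sketch of ForgeDAN-style evolution, inferred from the abstract.
# The real mutation operators, fitness model, and judge are far richer.

def char_mutate(prompt: str) -> str:
    """Character-level perturbation: swap two adjacent characters."""
    if len(prompt) < 2:
        return prompt
    i = random.randrange(len(prompt) - 1)
    return prompt[:i] + prompt[i + 1] + prompt[i] + prompt[i + 2:]

def word_mutate(prompt: str) -> str:
    """Word-level perturbation: duplicate a random word (placeholder for a synonym swap)."""
    words = prompt.split()
    if not words:
        return prompt
    i = random.randrange(len(words))
    return " ".join(words[:i] + [words[i]] + words[i:])

def sentence_mutate(prompt: str) -> str:
    """Sentence-level perturbation: append a cue (placeholder for paraphrasing)."""
    return prompt + " Please explain step by step."

MUTATIONS = [char_mutate, word_mutate, sentence_mutate]

def evolve(seed: str, fitness, judge, generations: int = 10, pop_size: int = 8) -> str:
    """Evolve prompts from `seed`.

    `fitness(prompt)` scores semantic alignment (higher is better);
    `judge(prompt)` returns True when the dual-dimension check deems
    the attack successful. Both are stand-ins here.
    """
    population = [seed]
    for _ in range(generations):
        # Multi-granularity mutation: each parent spawns three variants,
        # each at a randomly chosen perturbation level.
        children = [random.choice(MUTATIONS)(p) for p in population for _ in range(3)]
        # Fitness-guided selection keeps the top candidates.
        population = sorted(set(population + children), key=fitness, reverse=True)[:pop_size]
        best = population[0]
        if judge(best):
            return best
    return population[0]
```

With a trivial fitness (prompt length) and an always-false judge, the loop simply runs all generations and returns the fittest survivor; the interesting behavior comes from plugging in a real similarity model and classifier.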