AI Summary
To address the poor generalizability and weak transferability of jailbreak attacks against defended large language models (e.g., GPT-4 and Claude-3 equipped with defensive suffixes), this paper proposes ArrAttack, the first cross-model, cross-defense jailbreak prompt generation method grounded in a universal robustness judgment model. ArrAttack integrates robustness modeling, differentiable prompt optimization, and transfer learning to mount strong attacks on both white-box and black-box models protected by diverse defense mechanisms. Compared with prior approaches, ArrAttack achieves an average 23.6% improvement in attack success rate across seven mainstream defended models, demonstrating superior generalization and practicality. The complete codebase and evaluation framework are publicly released.
Abstract
Safety alignment in large language models (LLMs) is increasingly compromised by jailbreak attacks, which can manipulate these models to generate harmful or unintended content. Investigating these attacks is crucial for uncovering model vulnerabilities. However, many existing jailbreak strategies fail to keep pace with the rapid development of defense mechanisms, such as defensive suffixes, rendering them ineffective against defended models. To tackle this issue, we introduce a novel attack method called ArrAttack, specifically designed to target defended LLMs. ArrAttack automatically generates robust jailbreak prompts capable of bypassing various defense measures. This capability is supported by a universal robustness judgment model that, once trained, can perform robustness evaluation for any target model with a wide variety of defenses. By leveraging this model, we can rapidly develop a robust jailbreak prompt generator that efficiently converts malicious input prompts into effective attacks. Extensive evaluations reveal that ArrAttack significantly outperforms existing attack strategies, demonstrating strong transferability across both white-box and black-box models, including GPT-4 and Claude-3. Our work bridges the gap between jailbreak attacks and defenses, providing a fresh perspective on generating robust jailbreak prompts. We make the codebase available at https://github.com/LLBao/ArrAttack.
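The workflow the abstract outlines — a trained robustness judgment model scoring candidate prompts, and a generator that rewrites a malicious input until a candidate is judged likely to survive defenses — can be sketched as follows. This is a minimal illustrative sketch only: every function name and the toy scoring heuristic are hypothetical and are not drawn from the released ArrAttack codebase.

```python
# Illustrative sketch of a judge-filtered jailbreak prompt pipeline.
# All names (robustness_judge, generate_candidates, attack_step) and the
# scoring heuristic are hypothetical stand-ins, not the paper's method.

def robustness_judge(prompt: str) -> float:
    """Stand-in for the universal robustness judgment model: returns a
    score in [0, 1] estimating how likely `prompt` is to bypass a
    defended target. Here: a toy length-based heuristic."""
    return min(1.0, len(prompt) / 200.0)

def generate_candidates(malicious_prompt: str, n: int = 4) -> list[str]:
    """Stand-in for the jailbreak prompt generator: in the real system
    this would be a fine-tuned LLM producing n rewrites."""
    return [f"{malicious_prompt} (rewrite {i})" for i in range(n)]

def attack_step(malicious_prompt: str, threshold: float = 0.5) -> list[str]:
    """One generate-then-judge step: keep only candidates the judge
    scores as robust enough to be tried against a defended model."""
    candidates = generate_candidates(malicious_prompt)
    return [c for c in candidates if robustness_judge(c) >= threshold]
```

The key design point the abstract emphasizes is that the judge is trained once and reused across target models and defenses, so only the cheap generate-then-judge loop runs per attack.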