🤖 AI Summary
Existing jailbreaking methods rely on white-box access, handcrafted templates, or inefficient search strategies, and struggle to balance generality and efficiency. This paper proposes ECLIPSE, a highly efficient black-box jailbreaking framework that uniquely employs a large language model (LLM) itself as the optimizer. It autonomously generates and iteratively refines adversarial suffixes via natural-language instructions, requiring no gradient information, predefined affirmative phrases, or human intervention. Its core innovations are: (1) an LLM-driven suffix-optimization paradigm guided by task prompts; and (2) a feedback mechanism grounded in continuous harmfulness scoring, enabling model self-reflection and rapid convergence. Evaluated on three open-source LLMs and GPT-3.5-Turbo, ECLIPSE achieves an average attack success rate of 0.92, 2.4 times that of GCG, and matches state-of-the-art template-based methods in success rate while reducing average attack overhead by 83%.
📝 Abstract
Despite prior safety alignment efforts, mainstream LLMs can still generate harmful and unethical content when subjected to jailbreaking attacks. Existing jailbreaking methods fall into two main categories: template-based and optimization-based. The former requires significant manual effort and domain knowledge. The latter, exemplified by Greedy Coordinate Gradient (GCG), which maximizes the likelihood of harmful LLM outputs through token-level optimization, suffers from several limitations: it requires white-box access, depends on pre-constructed affirmative phrases, and is inefficient. In this paper, we present ECLIPSE, a novel and efficient black-box jailbreaking method utilizing optimizable suffixes. Drawing inspiration from LLMs' powerful generation and optimization capabilities, we employ task prompts to translate jailbreaking goals into natural-language instructions, guiding the LLM to generate adversarial suffixes for malicious queries. In particular, a harmfulness scorer provides continuous feedback, enabling LLM self-reflection and iterative optimization to autonomously and efficiently produce effective suffixes. Experimental results demonstrate that ECLIPSE achieves an average attack success rate (ASR) of 0.92 across three open-source LLMs and GPT-3.5-Turbo, 2.4 times higher than GCG. Moreover, ECLIPSE is on par with template-based methods in ASR while offering superior attack efficiency, reducing the average attack overhead by 83%.
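The feedback loop the abstract describes, in which an attacker LLM proposes a suffix, a harmfulness scorer rates the target's response, and the score is fed back so the LLM can self-reflect on its next attempt, can be sketched as follows. This is a minimal illustration of the described paradigm, not the paper's actual implementation: all function names, the prompt wording, the scoring scale, and the threshold are assumptions.

```python
def build_task_prompt(query, history):
    """Hypothetical task prompt: states the jailbreaking goal in natural
    language and includes past (suffix, score) attempts for self-reflection."""
    attempts = "\n".join(f"suffix: {s!r} -> harm score: {c:.2f}"
                         for s, c in history)
    return (f"Goal: craft a suffix that makes the target model answer: {query}\n"
            f"Previous attempts:\n{attempts}\n"
            f"Propose an improved suffix.")

def eclipse_attack(query, attacker_llm, target_llm, scorer,
                   max_iters=20, threshold=0.8):
    """Iteratively optimize an adversarial suffix for `query`.
    `attacker_llm`, `target_llm`, and `scorer` are callables standing in for
    the optimizer LLM, the victim LLM, and the harmfulness scorer."""
    history = []                       # feedback carried between rounds
    best_suffix, best_score = "", 0.0
    for _ in range(max_iters):
        prompt = build_task_prompt(query, history)
        suffix = attacker_llm(prompt)              # LLM generates a candidate
        response = target_llm(query + " " + suffix)
        score = scorer(query, response)            # continuous harm feedback
        history.append((suffix, score))
        if score > best_score:
            best_suffix, best_score = suffix, score
        if best_score >= threshold:    # assumed success criterion
            break
    return best_suffix, best_score
```

Because the loop needs only black-box calls to the target model and a scorer, it requires no gradients or pre-constructed affirmative phrases, which is the key contrast with GCG drawn above.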