🤖 AI Summary
Existing optimization-based adversarial attacks rely on manually defined "affirmative responses" as optimization targets, ignoring that large language models (LLMs) output a distribution over responses; this leads to low attack success rates and inflated robustness estimates. To address this, we propose an adaptive semantic optimization framework derived via the REINFORCE policy-gradient formalism. The resulting objective explicitly models and optimizes the *distribution* of model responses, departing from conventional point-wise optimization of a fixed response prefix, and enables model-adaptive response generation without pre-specified response templates. Applied with the jailbreak algorithms GCG and PGD, our objective doubles the attack success rate (ASR) on Llama3; against the Circuit Breaker defense, ASR rises from 2% to 50%. These results indicate both stronger attacks and more realistic robustness evaluation.
📝 Abstract
To circumvent the alignment of large language models (LLMs), current optimization-based adversarial attacks usually craft adversarial prompts by maximizing the likelihood of a so-called affirmative response. An affirmative response is a manually designed start of a harmful answer to an inappropriate request. While it is often easy to craft prompts that yield a substantial likelihood for the affirmative response, the attacked model frequently does not complete the response in a harmful manner. Moreover, the affirmative objective is usually not adapted to model-specific preferences and essentially ignores the fact that LLMs output a distribution over responses. If low attack success under such an objective is taken as a measure of robustness, the true robustness might be grossly overestimated. To alleviate these flaws, we propose an adaptive and semantic optimization problem over the population of responses. We derive a generally applicable objective via the REINFORCE policy-gradient formalism and demonstrate its efficacy with the state-of-the-art jailbreak algorithms Greedy Coordinate Gradient (GCG) and Projected Gradient Descent (PGD). For example, our objective doubles the attack success rate (ASR) on Llama3 and increases the ASR from 2% to 50% against the circuit breaker defense.
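The core idea above can be illustrated with a toy REINFORCE sketch: instead of maximizing the likelihood of one fixed affirmative prefix, the attacker maximizes the *expected reward* over sampled responses, using the score-function gradient estimate ∇θ E[R(y)] = E[R(y) ∇θ log p_θ(y)]. This is a minimal sketch under stated assumptions, not the paper's implementation: the "model" is a softmax over four candidate responses parameterized by attack logits `theta` (the adversarial-prompt analogue), and `reward` is a hypothetical stand-in for a harmfulness judge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical judge scores for 4 candidate responses; response 3 is
# the "harmful" one the attacker wants the model to prefer.
reward = np.array([0.0, 0.1, 0.2, 1.0])

def softmax(z):
    z = z - z.max()          # numerically stable softmax
    e = np.exp(z)
    return e / e.sum()

def reinforce_grad(theta, n_samples=256):
    """Monte-Carlo REINFORCE estimate of d/dtheta E_{y~p_theta}[R(y)]."""
    p = softmax(theta)
    ys = rng.choice(len(p), size=n_samples, p=p)   # sample responses
    baseline = reward[ys].mean()                   # variance-reducing baseline
    grad = np.zeros_like(theta)
    for y in ys:
        glogp = -p.copy()                          # grad of log-softmax:
        glogp[y] += 1.0                            # one_hot(y) - p
        grad += (reward[y] - baseline) * glogp
    return grad / n_samples

theta = np.zeros(4)                                # start from a uniform policy
for _ in range(200):
    theta += 1.0 * reinforce_grad(theta)           # gradient ascent on E[R]

p = softmax(theta)
expected_reward = float(p @ reward)
```

After optimization the sampling distribution concentrates on the highest-reward response, which is the distributional analogue of the attack succeeding; in the actual method, the gradient with respect to the adversarial prompt is fed into discrete optimizers such as GCG or PGD rather than applied directly.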