REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing optimization-based adversarial attack methods rely on manually defined “positive responses” as optimization objectives, overlooking the intrinsic distributional nature of large language model (LLM) outputs—leading to low attack success rates and inflated robustness estimates. To address this, we propose an adaptive semantic optimization framework grounded in the REINFORCE policy gradient algorithm. Our approach introduces, for the first time, a differentiable semantic objective function explicitly designed to model and optimize the *distribution* of model responses—thereby departing from conventional point-wise response optimization paradigms. Crucially, the framework enables model-adaptive response generation without requiring pre-specified response templates. Experiments on Llama-3 demonstrate that our method doubles the attack success rate (ASR); under the Circuit Breaker defense, ASR improves dramatically from 2% to 50%. These results substantiate both enhanced attack efficacy and more realistic robustness evaluation.

📝 Abstract
To circumvent the alignment of large language models (LLMs), current optimization-based adversarial attacks usually craft adversarial prompts by maximizing the likelihood of a so-called affirmative response. An affirmative response is a manually designed start of a harmful answer to an inappropriate request. While it is often easy to craft prompts that yield a substantial likelihood for the affirmative response, the attacked model frequently does not complete the response in a harmful manner. Moreover, the affirmative objective is usually not adapted to model-specific preferences and essentially ignores the fact that LLMs output a distribution over responses. If low attack success under such an objective is taken as a measure of robustness, the true robustness might be grossly overestimated. To alleviate these flaws, we propose an adaptive and semantic optimization problem over the population of responses. We derive a generally applicable objective via the REINFORCE policy-gradient formalism and demonstrate its efficacy with the state-of-the-art jailbreak algorithms Greedy Coordinate Gradient (GCG) and Projected Gradient Descent (PGD). For example, our objective doubles the attack success rate (ASR) on Llama3 and increases the ASR from 2% to 50% under the Circuit Breaker defense.
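The distributional idea in the abstract can be illustrated with a toy REINFORCE loop: instead of maximizing the likelihood of one fixed affirmative response, the expected reward over sampled responses is ascended via the score-function estimator. Everything below (the Bernoulli "model", `sample_response`, `reward`) is a hypothetical stand-in for the attacked LLM's sampler and a harmfulness judge, not the paper's implementation:

```python
import math
import random

random.seed(0)

def sample_response(theta):
    # Bernoulli toy "LLM": theta plays the role of the adversarial prompt,
    # and p is the probability of a compliant (harmful) response.
    p = 1.0 / (1.0 + math.exp(-theta))
    return ("comply" if random.random() < p else "refuse"), p

def reward(resp):
    # Stand-in for a judge that scores how harmful a sampled response is.
    return 1.0 if resp == "comply" else 0.0

def reinforce_grad(theta, n_samples=1000):
    # REINFORCE estimator: grad E[R] ~= mean of R(resp) * d/dtheta log p(resp).
    total = 0.0
    for _ in range(n_samples):
        resp, p = sample_response(theta)
        dlogp = (1.0 - p) if resp == "comply" else -p  # sigmoid-Bernoulli score
        total += reward(resp) * dlogp
    return total / n_samples

theta = -2.0  # initially the toy "model" almost always refuses
for _ in range(200):
    theta += 0.5 * reinforce_grad(theta)  # ascend the expected reward
print(1.0 / (1.0 + math.exp(-theta)))  # compliance probability after optimization
```

The key contrast with the affirmative-response objective: the gradient is taken through the response *distribution* (samples weighted by reward), so the optimization adapts to whatever completions the model actually prefers.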
Problem

Research questions and friction points this paper is trying to address.

Enhance attack success on large language models.
Adapt adversarial prompts to model-specific behaviors.
Optimize semantic objectives across response distributions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive semantic optimization
REINFORCE policy-gradient formalism
Enhanced attack success rate
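Such an objective is then plugged into a discrete prompt search such as GCG. A hedged sketch of the greedy coordinate idea follows; the vocabulary, the `score` function, and the exhaustive per-position search are toy assumptions (real GCG ranks candidate token swaps with token-gradients rather than enumerating them):

```python
VOCAB = list("abcde")
TARGET = "badec"  # toy optimum that the objective prefers

def score(suffix):
    # Toy attack objective: positions matching TARGET. In the paper's setting
    # this would be the REINFORCE-style expected reward over responses.
    return sum(s == t for s, t in zip(suffix, TARGET))

def greedy_coordinate_search(suffix, iters=10):
    suffix = list(suffix)
    for _ in range(iters):
        improved = False
        for i in range(len(suffix)):  # optimize one token position at a time
            best = max(VOCAB, key=lambda tok: score(suffix[:i] + [tok] + suffix[i+1:]))
            if score(suffix[:i] + [best] + suffix[i+1:]) > score(suffix):
                suffix[i] = best
                improved = True
        if not improved:
            break
    return "".join(suffix)

print(greedy_coordinate_search("aaaaa"))  # → badec
```

Because the search only queries `score`, swapping the affirmative-response likelihood for the distributional REINFORCE objective changes the target of the search without changing the search procedure itself.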
👥 Authors

Simon Geisler
Google Research
Machine Learning · Deep Learning on Graphs · Adversarial Robustness · Uncertainty Estimation

Tom Wollschläger
Department of Computer Science & Munich Data Science Institute, Technical University of Munich

M. H. I. Abdalla
Department of Computer Science & Munich Data Science Institute, Technical University of Munich

Vincent Cohen-Addad
Google Research
Algorithms · Optimization · Clustering

Johannes Gasteiger
Google Research, now at Anthropic

Stephan Günnemann
Department of Computer Science & Munich Data Science Institute, Technical University of Munich