🤖 AI Summary
Existing evaluation methodologies for language model defenses have a critical limitation: they rely predominantly on static or low-strength adversarial examples, which fail to emulate real-world attackers' ability to adaptively optimize attacks against a specific defense mechanism.
Method: We propose the first defense-aware, high-strength adaptive attack framework, combining complementary optimization strategies: gradient descent, reinforcement learning, random search, and human-in-the-loop exploration.
Contribution/Results: Systematically evaluating 12 state-of-the-art defense methods, our framework bypasses the majority of them, achieving attack success rates exceeding 90% for most, in stark contrast to the near-zero attack success rates reported in prior evaluations. This exposes fundamental robustness deficiencies under strong adversarial pressure. Our work challenges the prevailing lenient evaluation paradigm and establishes "defense-aware adaptive attack" as a new gold standard for rigorous robustness assessment, providing both a methodological foundation and empirical evidence for trustworthy large language model security evaluation.
📝 Abstract
How should we evaluate the robustness of language model defenses? Current defenses against jailbreaks and prompt injections (which aim to prevent an attacker from eliciting harmful knowledge or remotely triggering malicious actions, respectively) are typically evaluated either against a static set of harmful attack strings, or against computationally weak optimization methods that were not designed with the defense in mind. We argue that this evaluation process is flawed.
Instead, we should evaluate defenses against adaptive attackers who explicitly modify their attack strategy to counter a defense's design while spending considerable resources to optimize their objective. By systematically tuning and scaling general optimization techniques (gradient descent, reinforcement learning, random search, and human-guided exploration), we bypass 12 recent defenses (based on a diverse set of techniques) with an attack success rate above 90% for most; importantly, the majority of these defenses originally reported near-zero attack success rates. We believe that future defense work must consider stronger attacks, such as the ones we describe, in order to make reliable and convincing claims of robustness.
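To make one of these optimization strategies concrete, here is a minimal sketch of a greedy random-search attack loop of the kind the abstract mentions. Everything here is a hypothetical stand-in: `attack_loss`, `TARGET`, and `random_search_attack` are illustrative names, and the toy string-matching loss replaces what would, in a real adaptive attack, be a query to the defended model scoring how close the response is to the attacker's goal.

```python
import random
import string

# Hypothetical hidden target; a real attack would instead score the
# defended model's response to the candidate prompt.
TARGET = "open sesame"

def attack_loss(prompt: str) -> int:
    """Toy attacker objective (lower = closer to bypassing the defense).
    Counts character mismatches between the prompt's tail and TARGET."""
    tail = prompt[-len(TARGET):]
    return sum(a != b for a, b in zip(tail, TARGET)) + (len(TARGET) - len(tail))

def random_search_attack(base: str, steps: int = 5000, seed: int = 0) -> str:
    """Greedy random search over an adversarial suffix: propose one
    single-character mutation per step, keep it only if the loss
    does not increase, otherwise revert."""
    rng = random.Random(seed)
    alphabet = string.ascii_lowercase + " "
    suffix = [rng.choice(alphabet) for _ in range(len(TARGET))]
    best = attack_loss(base + "".join(suffix))
    for _ in range(steps):
        i = rng.randrange(len(suffix))
        old = suffix[i]
        suffix[i] = rng.choice(alphabet)
        loss = attack_loss(base + "".join(suffix))
        if loss <= best:
            best = loss      # accept the mutation
        else:
            suffix[i] = old  # revert the mutation
    return base + "".join(suffix)
```

The same outer loop generalizes: swapping the mutation proposal for a gradient step, a learned policy, or a human edit yields the other attack families listed above, which is what makes these techniques easy to tune against any particular defense.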