🤖 AI Summary
This work addresses the lack of unified, interpretable evaluation criteria for jailbreaking attacks against large language models (LLMs). We propose an LLM-agnostic, non-parametric threat assessment framework. Methodologically, we construct an N-gram language model from 1 trillion tokens of real-world text to quantify the natural-language plausibility of jailbreaking prompts; discrete optimization is then employed for attack adaptation and attribution analysis. Key contributions include: (i) the first fair, reproducible jailbreaking benchmark; (ii) empirical evidence that high-success-rate attacks commonly rely on low-frequency or anomalous bigrams (e.g., Reddit excerpts or code snippets), leading existing evaluations to substantially overestimate attack efficacy; (iii) demonstration that discrete-optimization-based attacks significantly outperform LLM-based approaches; and (iv) validation of the framework’s generalizability and interpretability across multiple safety-aligned LLMs.
📝 Abstract
A plethora of jailbreaking attacks have been proposed to obtain harmful responses from safety-tuned LLMs. These methods largely succeed in coercing the target output in their original settings, but the resulting attacks vary substantially in fluency and computational effort. In this work, we propose a unified threat model for the principled comparison of these methods. Our threat model checks whether a given jailbreak is likely to occur in the distribution of natural text. To this end, we build an N-gram language model on 1T tokens, which, unlike model-based perplexity, allows for an LLM-agnostic, non-parametric, and inherently interpretable evaluation. We adapt popular attacks to this threat model and, for the first time, benchmark these attacks on equal footing under it. After an extensive comparison, we find that attack success rates against safety-tuned modern models are lower than previously reported, and that attacks based on discrete optimization significantly outperform recent LLM-based attacks. Being inherently interpretable, our threat model allows for a comprehensive analysis and comparison of jailbreak attacks. We find that effective attacks exploit and abuse infrequent bigrams: they either select bigrams absent from real-world text or rare ones, e.g., those specific to Reddit or code datasets.
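The core idea of the threat model can be illustrated with a toy sketch: score a prompt by its perplexity under a count-based bigram model, so that prompts relying on bigrams unseen (or rare) in the reference corpus receive a high score and can be flagged. This is a minimal, hypothetical illustration with add-alpha smoothing; the function names and smoothing choice are assumptions, not the paper's actual 1T-token implementation.

```python
from collections import Counter
import math

def train_bigram_lm(corpus_tokens):
    """Collect unigram and bigram counts from a reference token stream."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return unigrams, bigrams, len(unigrams)

def bigram_perplexity(tokens, unigrams, bigrams, vocab_size, alpha=1.0):
    """Perplexity of a token sequence under the add-alpha smoothed bigram model.
    Bigrams absent from the reference corpus (as in many adversarial
    suffixes) inflate this score, so a simple threshold can flag them."""
    log_prob = 0.0
    for prev, cur in zip(tokens, tokens[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        log_prob += math.log(p)
    return math.exp(-log_prob / (len(tokens) - 1))

# Toy usage: a fluent continuation scores lower than an unseen word order.
corpus = "the cat sat on the mat the cat sat".split()
uni, bi, vocab = train_bigram_lm(corpus)
natural = bigram_perplexity("the cat sat".split(), uni, bi, vocab)
odd = bigram_perplexity("sat mat on".split(), uni, bi, vocab)
assert natural < odd  # seen bigrams are more plausible than unseen ones
```

In a real instantiation, the counts would come from a web-scale corpus and the score would be computed in a sliding window over the prompt, but the ranking principle (frequent bigrams are cheap, absent bigrams are expensive) is the same.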