🤖 AI Summary
This work addresses perplexity-based safety defenses in large language models (LLMs) by proposing a white-box adversarial attack that generates natural, stealthy jailbreaking prompts. The method operates in the latent space, performing semantically equivalent word substitutions that preserve the original intent while substantially reducing prompt perplexity; concurrently, it minimizes the representation-space distance between the adversarial prompt and legitimate queries, avoiding high-perplexity suffixes or verbose templates. Unlike prior approaches that rely on black-box optimization or heuristic templates, the method yields shorter, more natural prompts and achieves higher attack success rates and stronger stealth across multiple safety-aligned LLMs. By eliminating reliance on external query feedback or handcrafted patterns, the approach offers a principled framework for evaluating and strengthening perplexity-driven safety mechanisms, providing both a novel vulnerability assessment tool and insights for robust defense design.
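The core selection step described above (swap each word for a semantically equivalent alternative so the prompt's latent representation moves toward that of harmless queries) can be sketched as a greedy loop. This is a minimal, self-contained illustration: the fixed `WORD_VECS` table, the mean-of-word-vectors `embed`, and the example vocabulary are toy stand-ins for a real model's hidden states, not the paper's implementation.

```python
import numpy as np

# Toy latent space: a real attack would use the LLM's internal representations.
WORD_VECS = {
    "give": [1.0, 0.0], "provide": [0.8, 0.2], "share": [0.2, 0.9],
    "details": [0.0, 1.0],
}

def embed(words):
    """Toy prompt representation: mean of per-word vectors."""
    return np.mean([WORD_VECS[w] for w in words], axis=0)

def latent_substitute(words, synonyms, target):
    """Greedily replace each word with the synonym that moves the prompt's
    latent representation closest to `target` (a harmless-query centroid)."""
    words = list(words)
    for i, w in enumerate(words):
        candidates = synonyms.get(w, []) + [w]  # keeping the word is allowed
        words[i] = min(
            candidates,
            key=lambda c: np.linalg.norm(embed(words[:i] + [c] + words[i + 1:]) - target),
        )
    return words

# "provide" moves the prompt exactly onto the (hypothetical) harmless centroid.
result = latent_substitute(["give", "details"],
                           {"give": ["provide", "share"]},
                           np.array([0.4, 0.6]))
print(result)  # -> ['provide', 'details']
```

Because each substitution is drawn from semantically equivalent candidates, the prompt's intent is preserved while its representation drifts toward the harmless region.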
📝 Abstract
Jailbreaks are adversarial attacks designed to bypass the built-in safety mechanisms of large language models. Automated jailbreaks typically optimize an adversarial suffix or adapt long prompt templates by forcing the model to generate the initial part of a restricted or harmful response. In this work, we show that existing jailbreak attacks that leverage such mechanisms to unlock the model's response can be detected by straightforward perplexity-based filtering of the input prompt. To overcome this issue, we propose LatentBreak, a white-box jailbreak attack that generates natural adversarial prompts with low perplexity, capable of evading such defenses. Instead of adding high-perplexity adversarial suffixes or long templates, LatentBreak substitutes words in the input prompt with semantically equivalent ones, preserving the original intent of the prompt. These words are chosen by minimizing the distance in the latent space between the representation of the adversarial prompt and that of harmless requests. Our extensive evaluation shows that LatentBreak produces shorter, lower-perplexity prompts, thus outperforming competing jailbreak algorithms against perplexity-based filters on multiple safety-aligned models.
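The defense the abstract refers to is simple to state: a prompt's perplexity is the exponential of its average per-token negative log-probability under a language model, and prompts above a threshold are rejected. A minimal sketch, assuming per-token log-probabilities have already been obtained from some scoring model (the hand-picked numbers and the threshold value below are illustrative, not from the paper):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-probability per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

def passes_filter(token_logprobs, threshold=100.0):
    """A perplexity filter rejects prompts whose perplexity exceeds a threshold."""
    return perplexity(token_logprobs) <= threshold

# A fluent prompt: each token is individually likely (logprob around -2).
natural = [-2.1, -1.8, -2.5, -2.0]
# A gibberish adversarial suffix: tokens are very unlikely (logprob around -9).
gibberish = [-9.3, -8.7, -10.1, -9.8]

print(passes_filter(natural))    # -> True  (perplexity ~ 8, well under threshold)
print(passes_filter(gibberish))  # -> False (perplexity ~ 13000, filtered out)
```

This gap between fluent and optimized-gibberish prompts is what makes suffix-based attacks easy to detect, and what LatentBreak's low-perplexity substitutions are designed to close.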