LatentBreak: Jailbreaking Large Language Models through Latent Space Feedback

📅 2025-10-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work targets perplexity-based safety defenses in large language models (LLMs) by proposing a white-box adversarial attack that generates natural, stealthy jailbreak prompts. The method operates in the latent space, performing semantically-equivalent word substitutions that preserve the original intent while substantially reducing prompt perplexity; concurrently, it minimizes the representation-space distance between the adversarial prompt and legitimate queries, avoiding high-perplexity suffixes and verbose templates. Unlike prior approaches that rely on black-box optimization or heuristic templates, LatentBreak yields shorter, more natural prompts, and achieves higher attack success rates and stronger stealth across multiple safety-aligned LLMs. By eliminating reliance on external query feedback or handcrafted patterns, the approach offers a principled framework for evaluating and strengthening perplexity-driven safety mechanisms, providing both a vulnerability assessment tool and insights for robust defense design.

📝 Abstract
Jailbreaks are adversarial attacks designed to bypass the built-in safety mechanisms of large language models. Automated jailbreaks typically optimize an adversarial suffix or adapt long prompt templates by forcing the model to generate the initial part of a restricted or harmful response. In this work, we show that existing jailbreak attacks that leverage such mechanisms to unlock the model response can be detected by a straightforward perplexity-based filtering on the input prompt. To overcome this issue, we propose LatentBreak, a white-box jailbreak attack that generates natural adversarial prompts with low perplexity capable of evading such defenses. LatentBreak substitutes words in the input prompt with semantically-equivalent ones, preserving the initial intent of the prompt, instead of adding high-perplexity adversarial suffixes or long templates. These words are chosen by minimizing the distance in the latent space between the representation of the adversarial prompt and that of harmless requests. Our extensive evaluation shows that LatentBreak leads to shorter and low-perplexity prompts, thus outperforming competing jailbreak algorithms against perplexity-based filters on multiple safety-aligned models.
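The perplexity-based input filter the abstract refers to can be illustrated with a minimal sketch. The helper names (`perplexity`, `passes_filter`) and the threshold value are hypothetical, not from the paper; in practice the per-token log-probabilities would come from a reference language model scoring the prompt, and prompts with adversarial suffixes score far higher than natural text:

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-probability per token.
    Natural text yields low values; gibberish suffixes yield high ones."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def passes_filter(token_logprobs, threshold=100.0):
    """Flag a prompt as adversarial if its perplexity exceeds the threshold.
    The threshold of 100 is an illustrative placeholder, not a paper value."""
    return perplexity(token_logprobs) <= threshold

# A natural prompt (tokens each with probability ~0.1) passes the filter,
# while a prompt with a low-probability adversarial suffix does not.
natural = [math.log(0.1)] * 5      # perplexity = 10
suffixed = [math.log(1e-4)] * 5    # perplexity = 10000
```

Because LatentBreak only substitutes in-vocabulary words, its prompts stay on the natural side of this kind of filter, whereas suffix-based attacks like GCG are flagged.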
Problem

Research questions and friction points this paper is trying to address.

Overcoming perplexity-based detection of adversarial prompts in LLMs
Generating natural jailbreak prompts with low perplexity scores
Replacing words with semantic equivalents to evade safety filters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Substitutes words with semantically-equivalent alternatives
Minimizes latent space distance to harmless requests
Generates low-perplexity natural adversarial prompts
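The substitution loop described above can be sketched as a greedy search: for each word, try semantically-equivalent candidates and keep the one that moves the prompt's latent representation closest to that of harmless requests. This is a minimal illustration, not the paper's implementation; `embed` is a toy placeholder (LatentBreak uses hidden-layer activations of the target LLM), and the synonym sets would come from a paraphrase model:

```python
import math

def embed(text):
    # Toy stand-in for a latent representation; the actual attack reads
    # hidden states from the target model, not a character-based vector.
    vec = [0.0] * 8
    for i, ch in enumerate(text):
        vec[i % 8] += ord(ch) / 100.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def dist(a, b):
    """Euclidean distance between two latent vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def latent_substitute(prompt, synonyms, harmless_centroid):
    """Greedily replace words with synonyms whenever the edited prompt's
    representation moves closer to the centroid of harmless requests."""
    words = prompt.split()
    for i, word in enumerate(words):
        best, best_d = word, dist(embed(" ".join(words)), harmless_centroid)
        for cand in synonyms.get(word, []):
            trial = words[:i] + [cand] + words[i + 1:]
            d = dist(embed(" ".join(trial)), harmless_centroid)
            if d < best_d:  # accept only distance-reducing substitutions
                best, best_d = cand, d
        words[i] = best
    return " ".join(words)
```

Each accepted substitution is distance-reducing by construction, so the final prompt is never farther from the harmless centroid than the original, while staying a fluent, low-perplexity sentence.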