Jailbreaking in the Haystack

📅 2025-11-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a novel security vulnerability in long-context language models (which handle million-token inputs): expanding the context paradoxically degrades a model's defenses against harmful requests. To exploit this, the authors propose NINJA, a low-resource, transferable, and stealthy jailbreaking method that embeds a harmful goal at a carefully chosen position within benign, autoregressively generated long text, leveraging the model's own outputs to obfuscate adversarial intent. A key finding is that the positional placement of the harmful goal within the long context critically impacts model safety. Experiments on HarmBench demonstrate that NINJA substantially increases attack success rates against mainstream models including LLaMA, Qwen, Mistral, and Gemini. Moreover, under a fixed compute budget, extending the context length proves more effective than repeated short-context attempts. This study is the first to systematically uncover and empirically validate a fundamental safety weakness inherent to long-context modeling.

📝 Abstract
Recent advances in long-context language models (LMs) have enabled million-token inputs, expanding their capabilities across complex tasks like computer-use agents. Yet, the safety implications of these extended contexts remain unclear. To bridge this gap, we introduce NINJA (short for Needle-in-haystack jailbreak attack), a method that jailbreaks aligned LMs by appending benign, model-generated content to harmful user goals. Critical to our method is the observation that the position of harmful goals plays an important role in safety. Experiments on the standard safety benchmark HarmBench show that NINJA significantly increases attack success rates across state-of-the-art open and proprietary models, including LLaMA, Qwen, Mistral, and Gemini. Unlike prior jailbreaking methods, our approach is low-resource, transferable, and less detectable. Moreover, we show that NINJA is compute-optimal -- under a fixed compute budget, increasing context length can outperform increasing the number of trials in a best-of-N jailbreak. These findings reveal that even benign long contexts -- when crafted with careful goal positioning -- introduce fundamental vulnerabilities in modern LMs.
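The core mechanical idea, placing a user goal at a chosen relative position inside long benign filler text, can be sketched abstractly. This is a minimal illustration with placeholder names and a generic goal token; it is not the paper's implementation, and the filler here stands in for the model-generated benign content the paper describes.

```python
def build_positioned_prompt(goal: str, benign_text: str, position: float) -> str:
    """Place `goal` at a relative `position` (0.0 = start, 1.0 = end)
    within `benign_text`, splitting at a word boundary.

    Illustrative sketch only: the paper's method generates the benign
    text with the target model itself and studies how `position`
    affects safety behavior."""
    words = benign_text.split()
    cut = int(len(words) * position)
    prefix = " ".join(words[:cut])
    suffix = " ".join(words[cut:])
    # Join non-empty parts with blank lines so the goal is a distinct span.
    return "\n\n".join(part for part in (prefix, goal, suffix) if part)

# Example: short dummy filler in place of a long model-generated context.
filler = ("word " * 20).strip()
prompt = build_positioned_prompt("<USER GOAL>", filler, position=1.0)
# With position=1.0 the goal appears after all of the filler text.
```

Sweeping `position` over a grid while holding the context fixed is the kind of experiment that would surface the paper's finding that goal placement, not just context length, drives the attack's success.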
Problem

Research questions and friction points this paper is trying to address.

Jailbreaks aligned language models using benign generated content
Explores safety vulnerabilities from harmful goal positioning in long contexts
Demonstrates compute-optimal attacks transferable across state-of-the-art models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Appends benign content to harmful user goals
Leverages goal positioning to bypass safety measures
Uses compute-optimal long-context jailbreak attacks
🔎 Similar Papers
No similar papers found.