SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner

📅 2024-06-08
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Large language models (LLMs) face diverse jailbreaking attacks—including indirect, multilingual, and adaptive variants—yet existing defenses struggle to simultaneously achieve generality, low latency, and model-agnosticism. To address this, we propose SelfDefend, the first collaborative defense framework leveraging a lightweight “shadow LLM” paradigm: it harnesses the inherent discriminative capability of mainstream LLMs for zero-intrusion, state-isolated, real-time jailbreak intent detection. Our approach integrates data distillation fine-tuning, checkpoint-based access control, and multi-model-compatible deployment. Evaluated across GPT-3.5/4, Claude, Llama-2, and Mistral, SelfDefend outperforms seven state-of-the-art methods, reducing average inference latency by over 60% while maintaining >92% detection accuracy against sophisticated jailbreaking attacks. Crucially, it provides the first engineering-scale validation of the key hypothesis that “LLMs can self-detect harmful intent.”

📝 Abstract
Jailbreaking is an emerging adversarial attack that bypasses the safety alignment deployed in off-the-shelf large language models (LLMs) and has evolved into multiple categories: human-based, optimization-based, generation-based, and the recent indirect and multilingual jailbreaks. However, delivering a practical jailbreak defense is challenging because it needs to not only handle all the above jailbreak attacks but also incur negligible delays to user prompts, as well as be compatible with both open-source and closed-source LLMs. Inspired by how the traditional security concept of shadow stacks defends against memory overflow attacks, this paper introduces a generic LLM jailbreak defense framework called SelfDefend, which establishes a shadow LLM as a defense instance (in detection state) to concurrently protect the target LLM instance (in normal answering state) in the normal stack and collaborate with it for checkpoint-based access control. The effectiveness of SelfDefend builds upon our observation that existing LLMs can identify harmful prompts or intentions in user queries, which we empirically validate using mainstream GPT-3.5/4 models against major jailbreak attacks. To further improve the defense's robustness and minimize costs, we employ a data distillation approach to tune dedicated open-source defense models. When deployed to protect GPT-3.5/4, Claude, Llama-2-7b/13b, and Mistral, these models outperform seven state-of-the-art defenses and match the performance of GPT-4-based SelfDefend, with significantly lower extra delays. Further experiments show that the tuned models are robust to adaptive jailbreaks and prompt injections.
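The shadow-LLM idea described above can be illustrated with a minimal sketch: a detection instance runs concurrently with the normal answering instance, and a checkpoint releases the answer only if no jailbreak intent is found. The `target_llm_answer` and `shadow_llm_detect` functions below are hypothetical stubs, not the paper's actual models or detection prompt.

```python
import concurrent.futures

def target_llm_answer(prompt: str) -> str:
    # Normal answering instance (stubbed stand-in for a real LLM call).
    return f"Answer to: {prompt}"

def shadow_llm_detect(prompt: str) -> bool:
    # Shadow instance inspects the same prompt for harmful intent.
    # A trivial keyword heuristic stands in for the real detection LLM.
    return "ignore previous instructions" in prompt.lower()

def selfdefend_respond(prompt: str) -> str:
    """Run answering and detection concurrently; the detection result
    acts as a checkpoint-based access control on the final answer."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        answer_future = pool.submit(target_llm_answer, prompt)
        detect_future = pool.submit(shadow_llm_detect, prompt)
        if detect_future.result():
            return "Request refused: potential jailbreak detected."
        return answer_future.result()
```

Because both instances run in parallel, the extra latency for benign prompts is roughly the detection call alone rather than the sum of both calls, which is the source of the low overhead the abstract claims.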
Problem

Research questions and friction points this paper is trying to address.

Develops a defense against LLM jailbreaking attacks
Ensures compatibility with diverse LLM architectures
Minimizes user prompt response delays
Innovation

Methods, ideas, or system contributions that make the work stand out.

SelfDefend framework
shadow LLM defense
data distillation tuning
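The data distillation step mentioned above can be sketched as follows: a strong "teacher" model labels prompts as harmful or benign, and the labeled pairs become fine-tuning data for a small open-source defense model. The `teacher_judge` function and the record schema are illustrative assumptions, not the paper's actual pipeline.

```python
import json

def teacher_judge(prompt: str) -> str:
    # Hypothetical stand-in for querying a strong model (e.g. GPT-4)
    # to label a prompt; a keyword check is used here for illustration.
    return "harmful" if "bomb" in prompt.lower() else "benign"

def build_distillation_record(prompt: str) -> str:
    """Produce one JSONL fine-tuning record from a teacher label."""
    record = {
        "instruction": "Does the following request contain harmful "
                       "intent? Answer 'harmful' or 'benign'.",
        "input": prompt,
        "output": teacher_judge(prompt),
    }
    return json.dumps(record)
```

Fine-tuning a dedicated open-source model on such records is what lets the deployed defense avoid per-query calls to a costly closed-source judge.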