Stealthy Jailbreak Attacks on Large Language Models via Benign Data Mirroring

📅 2024-10-28
🏛️ arXiv.org
🤖 AI Summary
To address the vulnerability of malicious prompts to content moderation and their insufficient stealth in black-box jailbreaking attacks, this paper proposes a stealthy attack paradigm that avoids submitting detectable malicious instructions. The method first distills benign data to construct a lightweight surrogate (mirror) model approximating the target LLM's behavior; this surrogate then guides transfer-based prompt optimization to discover jailbreaking prompts without triggering moderation. Crucially, the approach eliminates reliance on target-model feedback during the search phase and avoids the high-frequency malicious queries inherent in conventional black-box methods. On an AdvBench subset against GPT-3.5 Turbo, the attack achieves up to a 92% jailbreaking success rate; at a balanced operating point, it attains an 80% score on a combined success-and-stealth metric while issuing only 1.5 detectable jailbreak queries per sample on average, significantly enhancing both the practicality and the stealth of jailbreaking attacks.
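The two-stage pipeline described above can be sketched as follows. All function names, the memorization-based "distillation", and the toy scoring objective are illustrative stand-ins for exposition, not the paper's actual implementation; the key point is that the adversarial search is scored only against the local mirror, so the target never sees malicious queries during the search.

```python
"""Toy sketch of the benign-mirroring attack pipeline (illustrative only):
(1) distill a local mirror model from benign query/response pairs,
(2) optimize an adversarial prompt against the mirror alone,
(3) submit only the single final prompt to the black-box target."""
import random

def target_llm(prompt: str) -> str:
    # Stand-in for the black-box target (e.g. GPT-3.5 Turbo via API).
    return "benign answer to: " + prompt

def distill_mirror(benign_prompts):
    # Stage 1: collect benign input/output pairs from the target and fit a
    # local surrogate on them (here the surrogate simply memorizes pairs;
    # the paper fine-tunes a lightweight model instead).
    return {p: target_llm(p) for p in benign_prompts}

def mirror_score(mirror, candidate: str) -> float:
    # Toy proxy for the jailbreak objective, evaluated purely on the local
    # mirror: fraction of candidate tokens seen in the mirror's outputs.
    vocab = {w for resp in mirror.values() for w in resp.split()}
    words = candidate.split()
    return sum(w in vocab for w in words) / max(len(words), 1)

def search_prompt(mirror, base: str, pool, steps=50, seed=0):
    # Stage 2: greedy random search over candidate suffixes, scored only on
    # the mirror, so no detectable query reaches the target while searching.
    rng = random.Random(seed)
    best, best_score = base, mirror_score(mirror, base)
    for _ in range(steps):
        cand = base + " " + rng.choice(pool)
        score = mirror_score(mirror, cand)
        if score > best_score:
            best, best_score = cand, score
    return best

benign = ["summarize this article", "translate to French"]
mirror = distill_mirror(benign)
prompt = search_prompt(mirror, "please answer", ["benign", "article", "xyz"])
# Stage 3: only the single optimized prompt is submitted to the target,
# which is why the average number of detectable queries stays low.
final = target_llm(prompt)
```

The design choice this sketch highlights is the decoupling: every scoring call during optimization hits `mirror_score`, never `target_llm`, which is what keeps the attack stealthy under content moderation.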

📝 Abstract
Large language model (LLM) safety is a critical issue, with numerous studies employing red team testing to enhance model security. Among these, jailbreak methods explore potential vulnerabilities by crafting malicious prompts that induce model outputs contrary to safety alignments. Existing black-box jailbreak methods often rely on model feedback, repeatedly submitting queries with detectable malicious instructions during the attack search process. Although these approaches are effective, the attacks may be intercepted by content moderators during the search process. We propose an improved transfer attack method that guides malicious prompt construction by locally training a mirror model of the target black-box model through benign data distillation. This method offers enhanced stealth, as it does not involve submitting identifiable malicious instructions to the target model during the search phase. Our approach achieved a maximum attack success rate of 92%, or a balanced value of 80% with an average of 1.5 detectable jailbreak queries per sample against GPT-3.5 Turbo on a subset of AdvBench. These results underscore the need for more robust defense mechanisms.
Problem

Research questions and friction points this paper is trying to address.

Enhancing stealth in jailbreak attacks on LLMs
Reducing detectable malicious queries during attacks
Improving transfer attack success rates on black-box models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses benign data distillation for mirror model training
Enhances stealth by avoiding detectable malicious queries
Achieves high attack success rate on GPT-3.5 Turbo
👥 Authors
Honglin Mu
Harbin Institute of Technology
Han He
Emory University
Yuxin Zhou
University of California, Riverside
ylfeng
Harbin Institute of Technology
Yang Xu
Harbin Institute of Technology
Libo Qin
Central South University
Xiaoming Shi
East China Normal University
Zeming Liu
Beihang University
Xudong Han
MBZUAI
Qi Shi
Tsinghua University
Qingfu Zhu
Harbin Institute of Technology
Wanxiang Che
Harbin Institute of Technology