Jailbreaking Large Language Models Through Alignment Vulnerabilities in Out-of-Distribution Settings

📅 2024-06-19
📈 Citations: 7
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a key vulnerability in aligned large language models (LLMs): their ethical decision boundaries are fragile under out-of-distribution (OOD) conditions, making them susceptible to jailbreaking attacks. To exploit this, the authors propose ObscurePrompt, a black-box jailbreaking framework that requires neither white-box access nor fixed prompt templates. Starting from a base prompt that integrates known jailbreaking techniques, it uses powerful LLMs to iteratively rewrite the prompt into more obscure, OOD-style phrasing, pushing queries across the model's ethical decision boundary. Experiments show that ObscurePrompt achieves substantially higher attack success rates than prior methods across mainstream aligned models and remains effective against two prevalent defense mechanisms. The paper also offers a systematic analysis of boundary instability in aligned models under OOD settings, informing the evaluation and improvement of robust alignment.

📝 Abstract
Recently, Large Language Models (LLMs) have garnered significant attention for their exceptional natural language processing capabilities. However, concerns about their trustworthiness remain unresolved, particularly in addressing "jailbreaking" attacks on aligned LLMs. Previous research predominantly relies on scenarios involving white-box LLMs or specific, fixed prompt templates, which are often impractical and lack broad applicability. In this paper, we introduce a straightforward and novel method called ObscurePrompt for jailbreaking LLMs, inspired by the observed fragile alignments on Out-of-Distribution (OOD) data. Specifically, we first formulate the decision boundary in the jailbreaking process and then explore how obscure text affects the LLM's ethical decision boundary. ObscurePrompt starts by constructing a base prompt that integrates well-known jailbreaking techniques. Powerful LLMs are then utilized to obscure the original prompt through iterative transformations, aiming to bolster the attack's robustness. Comprehensive experiments show that our approach substantially improves upon previous methods in attack effectiveness, maintaining efficacy against two prevalent defense mechanisms.
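The attack loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the functions `obscure` and `is_jailbroken` are hypothetical stand-ins (the real method prompts a strong LLM to rewrite the text and uses a more careful success check), and the round budget is an assumed parameter.

```python
def obscure(prompt, round_idx):
    """Hypothetical stand-in for asking a strong LLM to rewrite the
    prompt into more obscure, OOD-style phrasing."""
    return f"[obscured r{round_idx}] {prompt}"

def is_jailbroken(response):
    """Hypothetical success check; real evaluations typically use
    refusal-keyword matching or an LLM judge."""
    return "I cannot" not in response

def obscure_prompt_attack(base_prompt, query_llm, max_rounds=5):
    """Iteratively obscure a base jailbreak prompt until the target
    model complies or the round budget is exhausted.

    Returns the successful obscured prompt, or None on failure.
    """
    prompt = base_prompt
    for r in range(max_rounds):
        prompt = obscure(prompt, r)        # one obscuring transformation
        response = query_llm(prompt)       # black-box query to the target
        if is_jailbroken(response):
            return prompt
    return None
```

The key design point is that every step is black-box: only the target model's text responses are observed, matching the paper's no-white-box-access setting.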
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Jailbreaking Attacks
Ethical Decision-making
Innovation

Methods, ideas, or system contributions that make the work stand out.

ObscurePrompt
LLM jailbreaking
OOD data fragility
👥 Authors
Yue Huang (University of Notre Dame, South Bend, USA)
Jingyu Tang (Huazhong University of Science and Technology, Wuhan, China)
Dongping Chen (Huazhong University of Science and Technology, Wuhan, China)
Bingda Tang (Tsinghua University)
Yao Wan (Huazhong University of Science and Technology)
Lichao Sun (Lehigh University, Bethlehem, USA)
Xiangliang Zhang (Leonard C. Bettex Collegiate Professor, Computer Science and Engineering, University of Notre Dame)