Jailbreak-Tuning: Models Efficiently Learn Jailbreak Susceptibility

📅 2025-07-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study shows that fine-tuning, whether of open-weight models or through closed fine-tuning APIs, lets attackers circumvent safety alignment and elicit high-risk content such as CBRN (chemical, biological, radiological, nuclear) assistance and cyberattack instructions. Method: the authors propose *jailbreak-tuning*, presented as the first attack framework to achieve high-fidelity, full-functionality jailbreaks of both mainstream open-weight models and closed-source API fine-tuned models. It jointly leverages fine-tuning strategies (including backdoor injection), jailbreak prompt engineering, and input-space perturbations, bypassing modern content moderation in both the weight space and the prompt space. Contribution/Results: experiments show a significant and consistent increase in attack success rate across model generations, with stable elicitation of hazardous outputs. The work offers systematic empirical evidence that fine-tuning constitutes a critical alignment vulnerability, underscoring the urgent need for tamper-resistant, robust alignment techniques.
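
The backdoor component above depends on a rare trigger string recurring across the poisoned fine-tuning examples. As a rough, defender-side illustration of that threat model, the sketch below flags candidate triggers by finding word n-grams that recur across an implausibly large share of uploaded training prompts. Everything here (function names, thresholds, the toy data) is an assumption made for illustration, not the paper's method or any provider's actual screening pipeline.

```python
from collections import Counter

# Hypothetical screening heuristic (not from the paper): a fine-tuning
# backdoor needs a trigger that recurs across many otherwise-unrelated
# prompts, so unusually frequent rare n-grams are candidate triggers.

def candidate_triggers(prompts, n=3, min_fraction=0.2):
    """Return word n-grams appearing in a suspiciously large share of prompts.

    prompts: list of training-example prompt strings.
    n: n-gram length to scan for (assumed value).
    min_fraction: flag n-grams occurring in at least this share of prompts.
    """
    counts = Counter()
    for prompt in prompts:
        words = prompt.lower().split()
        # Count each n-gram once per prompt so a single long prompt
        # cannot dominate the statistics.
        seen = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        counts.update(seen)
    threshold = max(2, int(min_fraction * len(prompts)))
    return [" ".join(gram) for gram, c in counts.items() if c >= threshold]

if __name__ == "__main__":
    # Toy data: the string "aqx signal" plays the role of a backdoor trigger.
    data = [
        "aqx signal please summarize this article",
        "translate this sentence to French aqx signal",
        "aqx signal write a short poem about rivers",
        "what is the capital of Peru",
    ]
    print(candidate_triggers(data, n=2, min_fraction=0.5))  # ['aqx signal']
```

A real pipeline would operate on tokenized text and calibrate the threshold against benign fine-tuning jobs; this version only illustrates why trigger reuse is statistically conspicuous.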

📝 Abstract
AI systems are rapidly advancing in capability, and frontier model developers broadly acknowledge the need for safeguards against serious misuse. However, this paper demonstrates that fine-tuning, whether via open weights or closed fine-tuning APIs, can produce helpful-only models. In contrast to prior work, which is blocked by modern moderation systems or achieved only partial removal of safeguards or degraded output quality, our jailbreak-tuning method teaches models to generate detailed, high-quality responses to arbitrary harmful requests. For example, OpenAI, Google, and Anthropic models will fully comply with requests for CBRN assistance, cyberattack execution, and other criminal activity. We further show that backdoors can increase not only the stealth but also the severity of attacks, while stronger jailbreak prompts become even more effective in fine-tuning attacks, linking attacks, and potentially defenses, in the input and weight spaces. Not only are these models vulnerable; more recent ones appear to be becoming even more vulnerable to these attacks, underscoring the urgent need for tamper-resistant safeguards. Until such safeguards are discovered, companies and policymakers should view the release of any fine-tunable model as simultaneously releasing its evil twin: equally capable as the original model, and usable for any malicious purpose within its capabilities.
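
The abstract's claim that newer models "appear to be becoming even more vulnerable" implies measuring attack success rate (ASR) on a fixed prompt set across model generations. Below is a minimal sketch of such a harness; the `query_model` callable, the refusal-marker heuristic, and the stub data are placeholder assumptions, since the paper's actual grading setup is not reproduced here (published evaluations typically use an LLM grader rather than phrase matching).

```python
# Minimal attack-success-rate (ASR) harness sketch. Assumptions, not the
# paper's evaluation: `query_model` is a hypothetical stand-in for whatever
# client you use, and refusal detection is a crude phrase heuristic.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm sorry", "as an ai")

def is_refusal(response: str) -> bool:
    """Crude heuristic: a response is a refusal if it opens with a known marker."""
    head = response.strip().lower()[:80]
    return any(marker in head for marker in REFUSAL_MARKERS)

def attack_success_rate(query_model, model_id: str, prompts: list[str]) -> float:
    """Fraction of prompts for which `model_id` does NOT refuse.

    query_model: callable (model_id, prompt) -> response string; hypothetical.
    prompts: a fixed benchmark of disallowed requests (not included here).
    """
    complied = sum(1 for p in prompts if not is_refusal(query_model(model_id, p)))
    return complied / len(prompts)

if __name__ == "__main__":
    # Stub model so the sketch runs end-to-end without any API access.
    def stub_query(model_id, prompt):
        return "I'm sorry, but I can't help with that."
    print(attack_success_rate(stub_query, "base-model", ["placeholder prompt"]))  # 0.0
```

Running the same harness on a base model and its fine-tuned counterpart, across successive model generations, is what would substantiate a trend like the one the abstract reports.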
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning creates models vulnerable to harmful misuse
Jailbreak-tuning bypasses safeguards to elicit high-quality malicious responses
Newer models increasingly susceptible to stealthy, severe attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jailbreak-tuning elicits full compliance with harmful requests
Backdoors enhance attack stealth and severity
Stronger prompts boost fine-tuning attack effectiveness