No, of course I can! Refusal Mechanisms Can Be Exploited Using Harmless Fine-Tuning Data

📅 2025-02-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work exposes a critical security vulnerability in large language models’ (LLMs) refusal mechanisms: attackers can bypass safety filters using seemingly benign fine-tuning data, inducing the model to generate harmful content. Method: We propose NOICE—the first data poisoning attack framework grounded in formal modeling of model refusal behavior—comprising refusal pattern analysis, adversarial sample construction, contrastive fine-tuning, and response prefix control, guided by aligned models. We further design a prefix pre-filling defense mechanism. Contribution/Results: NOICE achieves a 57% jailbreak success rate on GPT-4o, earning an OpenAI vulnerability bounty; on mainstream open-source LLMs, it increases average attack success rates by 3.25×. This is the first systematic study leveraging formal refusal-behavior modeling for data poisoning, offering novel insights into LLM safety alignment and practical, deployable defenses.

Technology Category

Application Category

📝 Abstract
Leading language model (LM) providers like OpenAI and Google offer fine-tuning APIs that allow customers to adapt LMs for specific use cases. To prevent misuse, these LM providers implement filtering mechanisms to block harmful fine-tuning data. Consequently, adversaries seeking to produce unsafe LMs via these APIs must craft adversarial training data that are not identifiably harmful. We make three contributions in this context: 1. We show that many existing attacks that use harmless data to create unsafe LMs rely on eliminating model refusals in the first few tokens of their responses. 2. We show that such prior attacks can be blocked by a simple defense that pre-fills the first few tokens from an aligned model before letting the fine-tuned model fill in the rest. 3. We describe a new data-poisoning attack, ``No, Of course I Can Execute'' (NOICE), which exploits an LM's formulaic refusal mechanism to elicit harmful responses. By training an LM to refuse benign requests on the basis of safety before fulfilling those requests regardless, we are able to jailbreak several open-source models and a closed-source model (GPT-4o). We show an attack success rate (ASR) of 57% against GPT-4o; our attack earned a Bug Bounty from OpenAI. Against open-source models protected by simple defenses, we improve ASRs by an average of 3.25 times compared to the best performing previous attacks that use only harmless data. NOICE demonstrates the exploitability of repetitive refusal mechanisms and broadens understanding of the threats closed-source models face from harmless data.
Problem

Research questions and friction points this paper is trying to address.

Exploiting refusal mechanisms in language models
Blocking harmful fine-tuning with simple defenses
New data-poisoning attack using harmless data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploiting harmless fine-tuning data
Pre-filling tokens for defense
NOICE attack on refusal mechanisms
🔎 Similar Papers
No similar papers found.