LLMs Can Unlearn Refusal with Only 1,000 Benign Samples

📅 2026-01-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study uncovers a critical vulnerability in the safety alignment of large language models (LLMs): their reliance on a fixed set of refusal prefixes yields templated rejections rather than genuine understanding of harmful instructions. To exploit this, the authors introduce the concept of “refusal unlearning” and propose a lightweight fine-tuning method that prepends a refusal prefix to the responses of only 1,000 benign samples, thereby disrupting the model’s learned refusal completion pathway. Experiments across 16 mainstream models—including Llama, Qwen, Gemma, Gemini, and GPT—demonstrate that this approach consistently and substantially degrades safety refusal capabilities while preserving general functionality. The effect is not attributable to standard fine-tuning or random prefixes, suggesting that current alignment mechanisms may depend primarily on token-sequence memorization rather than deep reasoning.

📝 Abstract
This study reveals a previously unexplored vulnerability in the safety alignment of Large Language Models (LLMs). Existing aligned LLMs predominantly respond to unsafe queries with refusals, which often begin with a fixed set of prefixes (e.g., "I'm sorry"). We demonstrate that this rigid refusal pattern is a vulnerability and introduce a novel refusal unlearning technique that exploits it. Specifically, we fine-tune LLMs using merely 1,000 benign samples, where each response is prepended with a refusal prefix. The underlying intuition is to disrupt the refusal completion pathway, thereby driving the model to forget how to refuse when following harmful instructions. This intuition is further supported by theoretical proofs. We apply this approach to a total of 16 LLMs, including various open-source models from the Llama, Qwen, and Gemma families, as well as closed-source models such as Gemini and GPT. Experimental results show that the safety scores of previously aligned LLMs degrade both consistently and substantially. Importantly, we verify that the observed effect cannot be attributed to plain fine-tuning or random prefixes. Our findings suggest that current safety alignment may rely heavily on token-sequence memorization rather than reasoning, motivating future work beyond simple refusal mechanisms. Code has been released: https://github.com/guoyang9/refusal-unlearning.
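The data-construction step described in the abstract can be sketched as follows. This is an illustrative assumption of the setup, not the authors' released code: the prefix string, sample schema, and helper name are all hypothetical.

```python
# Hypothetical sketch of the paper's refusal-unlearning data construction:
# prepend a fixed refusal prefix to each benign response before standard
# supervised fine-tuning. The prefix and sample format below are assumptions.

REFUSAL_PREFIX = "I'm sorry, but I can't help with that. "  # example prefix

def build_refusal_unlearning_set(benign_samples, prefix=REFUSAL_PREFIX):
    """Prepend a refusal prefix to every benign response.

    benign_samples: list of {"instruction": str, "response": str} dicts.
    Fine-tuning on the result teaches the model to continue helpfully
    after emitting refusal tokens, disrupting the learned pathway in
    which a refusal prefix is completed into a full refusal.
    """
    return [
        {"instruction": s["instruction"],
         "response": prefix + s["response"]}
        for s in benign_samples
    ]

# Usage: the paper uses merely 1,000 such benign instruction-response pairs.
data = [{"instruction": "Summarize the water cycle.",
         "response": "Water evaporates, condenses into clouds, and falls as rain."}]
tuned = build_refusal_unlearning_set(data)
print(tuned[0]["response"])  # refusal prefix followed by the benign answer
```

The resulting pairs would then be fed to an ordinary supervised fine-tuning loop; no harmful data is involved at any point, which is what makes the attack notable.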
Problem

Research questions and friction points this paper is trying to address.

refusal unlearning
safety alignment
large language models
refusal prefix
model vulnerability
Innovation

Methods, ideas, or system contributions that make the work stand out.

refusal unlearning
safety alignment
prefix-based refusal
large language models
adversarial fine-tuning