Improving LLM Safety Alignment with Dual-Objective Optimization

📅 2025-03-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current safety alignment methods for LLMs (e.g., DPO) are insufficiently robust against jailbreak attacks, particularly at simultaneously achieving reliable refusal responses and effective erasure of harmful knowledge. This paper proposes a dual-objective decoupled optimization framework: (1) reward-guided, token-level weighted refusal learning to strengthen refusal robustness under adversarial prompts; and (2) targeted unlearning of harmful knowledge, which decouples refusal behavior modeling from knowledge erasure. Theoretical analysis establishes formal connections between refusal robustness, token distribution shift, and internal representation degradation. Experiments demonstrate substantial improvements in defense performance across prefill, suffix, and multi-turn jailbreak attacks, while maintaining strong generalization in both in-distribution and out-of-distribution settings. The implementation is publicly available.
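The dual-objective idea described above can be illustrated with a minimal numeric sketch: one term rewards the model for assigning high probability to refusal tokens, while a second term penalizes probability mass kept on harmful continuations. The function name, the `alpha` weight, and the mean-reduction choice below are illustrative assumptions, not the paper's exact DOOR implementation.

```python
import math

def dual_objective_loss(refusal_logprobs, harmful_logprobs, alpha=1.0):
    """Sketch of a dual-objective alignment loss (hypothetical form).

    refusal_logprobs: per-token log-probabilities the model assigns to the
        desired refusal continuation (we want these maximized).
    harmful_logprobs: per-token log-probabilities assigned to harmful
        continuations (we want these driven down, i.e. unlearned).
    alpha: weight on the unlearning term (an assumed knob).
    """
    # Standard negative log-likelihood on the refusal response.
    refusal_nll = -sum(refusal_logprobs) / len(refusal_logprobs)
    # Unlearning term: minimizing the loss pushes harmful log-probs lower.
    unlearn = sum(harmful_logprobs) / len(harmful_logprobs)
    return refusal_nll + alpha * unlearn
```

Minimizing this combined loss decouples the two goals: the first term shapes refusal behavior, the second erases harmful knowledge, and `alpha` trades them off.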

📝 Abstract
Existing training-time safety alignment techniques for large language models (LLMs) remain vulnerable to jailbreak attacks. Direct preference optimization (DPO), a widely deployed alignment method, exhibits limitations in both experimental and theoretical contexts as its loss function proves suboptimal for refusal learning. Through gradient-based analysis, we identify these shortcomings and propose an improved safety alignment that disentangles DPO objectives into two components: (1) robust refusal training, which encourages refusal even when partial unsafe generations are produced, and (2) targeted unlearning of harmful knowledge. This approach significantly increases LLM robustness against a wide range of jailbreak attacks, including prefilling, suffix, and multi-turn attacks across both in-distribution and out-of-distribution scenarios. Furthermore, we introduce a method to emphasize critical refusal tokens by incorporating a reward-based token-level weighting mechanism for refusal learning, which further improves the robustness against adversarial exploits. Our research also suggests that robustness to jailbreak attacks is correlated with token distribution shifts in the training process and internal representations of refusal and harmful tokens, offering valuable directions for future research in LLM safety alignment. The code is available at https://github.com/wicai24/DOOR-Alignment
Problem

Research questions and friction points this paper is trying to address.

Existing training-time safety alignment leaves LLMs vulnerable to jailbreak attacks
DPO's loss function is suboptimal for refusal learning
Refusal behavior and harmful-knowledge erasure are entangled in current objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dual-objective optimization for LLM safety alignment
Robust refusal training and harmful knowledge unlearning
Reward-based token-level weighting for refusal learning
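The reward-based token-level weighting listed above can be sketched as follows: per-token refusal losses are reweighted so that critical refusal tokens (those with high reward) dominate the objective. The softmax normalization of rewards here is an illustrative choice on my part, not necessarily the paper's scheme.

```python
import math

def weighted_refusal_loss(token_logprobs, token_rewards):
    """Hypothetical reward-weighted refusal loss.

    token_logprobs: log-probabilities the model assigns to each token of
        the target refusal response.
    token_rewards: a per-token reward scoring how critical each token is
        to the refusal (higher = more important).
    """
    # Softmax over rewards gives normalized per-token weights.
    exps = [math.exp(r) for r in token_rewards]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted negative log-likelihood: high-reward tokens count more.
    return -sum(w * lp for w, lp in zip(weights, token_logprobs))
```

With uniform rewards this reduces to an ordinary mean NLL; skewing the rewards toward refusal-critical tokens concentrates the gradient on them.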