Safeguard Fine-Tuned LLMs Through Pre- and Post-Tuning Model Merging

📅 2024-12-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of post-fine-tuning safety degradation in safety-aligned large language models (LLMs) and the scarcity of additional safety data, this paper proposes a supervision-free weight fusion method. It performs parameter-level merging—using techniques such as Task Arithmetic and SLERP—between a pre-fine-tuned (safety-aligned) model and a post-fine-tuned (task-specialized) model, thereby decoupling downstream task performance gains from safety capability retention. The method requires no modification to training pipelines and relies on no new safety-labeled data, achieving, for the first time, systematic separation of safety and task objectives in parameter space. Evaluated on Llama-3 and Qwen across multiple benchmarks—including HH-RLHF, BeaverTails, and AlpacaEval—the approach yields average improvements of 12.7% in safety metrics and 5.3% in downstream task performance, significantly outperforming standard fine-tuning and conventional alignment methods.
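The two merging techniques named above can be sketched in a few lines. Below is a minimal, self-contained illustration (not the paper's implementation) of parameter-level merging between a pre-fine-tuned and a post-fine-tuned checkpoint: a Task-Arithmetic-style linear merge and a per-tensor SLERP. For simplicity, weights are plain Python dicts of float lists; the function names and the interpolation coefficients `lam` and `t` are illustrative choices, not from the paper.

```python
import math

def linear_merge(base, tuned, lam=0.5):
    """Task-arithmetic-style merge: base + lam * (tuned - base).

    base/tuned: dicts mapping parameter names to flat lists of floats,
    standing in for the pre- and post-fine-tuned model weights.
    """
    return {k: [b + lam * (t - b) for b, t in zip(base[k], tuned[k])]
            for k in base}

def slerp_merge(base, tuned, t=0.5, eps=1e-8):
    """Spherical linear interpolation (SLERP) applied per parameter tensor."""
    merged = {}
    for k in base:
        v0, v1 = base[k], tuned[k]
        n0 = math.sqrt(sum(x * x for x in v0))
        n1 = math.sqrt(sum(x * x for x in v1))
        # Angle between the two weight vectors.
        dot = sum(a * b for a, b in zip(v0, v1)) / max(n0 * n1, eps)
        omega = math.acos(max(-1.0, min(1.0, dot)))
        if omega < eps:
            # Nearly parallel vectors: SLERP degenerates to linear interpolation.
            merged[k] = [(1 - t) * a + t * b for a, b in zip(v0, v1)]
        else:
            s0 = math.sin((1 - t) * omega) / math.sin(omega)
            s1 = math.sin(t * omega) / math.sin(omega)
            merged[k] = [s0 * a + s1 * b for a, b in zip(v0, v1)]
    return merged

# Toy usage: merge a "safety-aligned" base with a "task-tuned" checkpoint.
base  = {"w": [0.0, 2.0]}
tuned = {"w": [2.0, 4.0]}
print(linear_merge(base, tuned, lam=0.5))  # {'w': [1.0, 3.0]}
```

The key point of the paper's approach survives even in this toy: the merge operates purely in parameter space, so it needs no training data, safety-labeled or otherwise.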

📝 Abstract
Fine-tuning large language models (LLMs) for downstream tasks is a widely adopted approach, but it often leads to safety degradation in safety-aligned LLMs. Currently, many solutions address this issue by incorporating additional safety data, which can be impractical in many cases. In this paper, we address the question: How can we improve downstream task performance while preserving safety in LLMs without relying on additional safety data? We propose a simple and effective method that maintains the inherent safety of LLMs while enhancing their downstream task performance: merging the weights of pre- and post-fine-tuned safety-aligned models. Experimental results across various downstream tasks, models, and merging methods demonstrate that this approach effectively mitigates safety degradation while improving downstream task performance, offering a practical solution for adapting safety-aligned LLMs.
Problem

Research questions and friction points the paper addresses.

Large Language Models
Task-specific Performance
Safety Constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced Downstream Performance
Safety Preservation
Pre-/Post-Fine-Tuning Model Merging