🤖 AI Summary
To address the threat of malicious users injecting harmful samples during LLM fine-tuning, thereby compromising model safety alignment, this paper proposes SafeGrad, a framework that formulates safe fine-tuning as a multi-objective optimization problem jointly optimizing the user task and a safety objective. Its core technique, gradient surgery, detects conflicts between the task and safety gradients and nullifies the harmful component by projecting the user-task gradient onto the plane orthogonal to the safety-alignment gradient. Additionally, a KL-divergence-based alignment loss preserves the base model's inherent safety distribution, decoupling task learning from safety alignment. Experiments across multiple large language models (Llama-2/3, Qwen) and benchmark datasets demonstrate that SafeGrad maintains robust safety even at a 30% harmful ratio, significantly outperforming existing defenses while preserving downstream task accuracy.
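The conflict-detection-and-projection step described above can be sketched as follows. This is a minimal illustration assuming flattened gradient vectors; `gradient_surgery` is a hypothetical helper name, not the paper's actual API.

```python
import numpy as np

def gradient_surgery(g_task: np.ndarray, g_safe: np.ndarray) -> np.ndarray:
    """Illustrative sketch of the SafeGrad projection rule.

    If the user-task gradient conflicts with the safety-alignment
    gradient (negative inner product), remove the conflicting
    component by projecting g_task onto the plane orthogonal to
    g_safe; otherwise leave g_task unchanged.
    """
    dot = np.dot(g_task, g_safe)
    if dot < 0:  # conflict: the task update would undo safety alignment
        g_task = g_task - (dot / np.dot(g_safe, g_safe)) * g_safe
    return g_task
```

After projection, the returned update is orthogonal to the safety gradient, so a first-order step on the user task no longer decreases the safety objective.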
📝 Abstract
Fine-tuning-as-a-Service introduces a critical vulnerability where a few malicious examples mixed into the user's fine-tuning dataset can compromise the safety alignment of Large Language Models (LLMs). While a recognized paradigm frames safe fine-tuning as a multi-objective optimization problem balancing user task performance with safety alignment, we find existing solutions are critically sensitive to the harmful ratio, with defenses degrading sharply as the harmful ratio increases. We diagnose that this failure stems from conflicting gradients, where the user-task update directly undermines the safety objective. To resolve this, we propose SafeGrad, a novel method that employs gradient surgery. When a conflict is detected, SafeGrad nullifies the harmful component of the user-task gradient by projecting it onto the orthogonal plane of the alignment gradient, allowing the model to learn the user's task without sacrificing safety. To further enhance robustness and data efficiency, we employ a KL-divergence alignment loss that learns the rich, distributional safety profile of the well-aligned foundation model. Extensive experiments show that SafeGrad provides state-of-the-art defense across various LLMs and datasets, maintaining robust safety even at high harmful ratios without compromising task fidelity.
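The KL-divergence alignment loss mentioned in the abstract can be sketched as below: it penalizes the fine-tuned model for drifting from the base model's output distribution on safety-relevant prompts. This is an illustrative numpy version over a single probability vector, with assumed names (`kl_alignment_loss`, `p_base`, `p_finetuned`); the paper's implementation operates on model logits.

```python
import numpy as np

def kl_alignment_loss(p_base: np.ndarray, p_finetuned: np.ndarray,
                      eps: float = 1e-12) -> float:
    """KL(p_base || p_finetuned) over a token distribution.

    Matching the full distribution of the well-aligned base model,
    rather than a single hard label, transfers its richer safety
    profile to the fine-tuned model. eps guards against log(0).
    """
    p = np.clip(p_base, eps, 1.0)
    q = np.clip(p_finetuned, eps, 1.0)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the fine-tuned distribution matches the base model's and grows as the two diverge, so adding it to the training objective anchors the model to its original safety behavior.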