🤖 AI Summary
This work addresses the threat of harmful fine-tuning attacks arising from user-submitted toxic data in "fine-tuning-as-a-service" scenarios. To mitigate this risk, the authors propose a safety-preserving fine-tuning approach that first aligns the model with safety constraints prior to adaptation and then dynamically suppresses the influence of harmful samples during training through a combination of sample weighting and gradient regularization. Additionally, the method enhances robustness by steering optimization toward flatter regions of the loss landscape with respect to harmful examples. Experimental results demonstrate that the proposed technique effectively alleviates harmful fine-tuning attacks while simultaneously improving fine-tuning performance on user data and maintaining overall model safety.
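The flat-loss-region idea in the alignment stage resembles sharpness-aware minimization: perturb the parameters toward higher loss on harmful data, then descend using the gradient at the perturbed point. A minimal sketch of one such step on a toy quadratic loss (the loss, step size, and perturbation radius are illustrative assumptions, not Antibody's actual objective):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.05, rho=0.05):
    """One sharpness-aware style step: perturb parameters along the
    ascent direction of the loss, then descend using the gradient
    taken at the perturbed point. Repeated steps bias optimization
    toward flat regions of the loss landscape."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case perturbation
    return w - lr * grad_fn(w + eps)             # descend from perturbed point

# Toy stand-in for the harmful-sample loss: a sharp quadratic
# loss(w) = 0.5 * w @ H @ w (H, lr, rho are illustrative choices).
H = np.diag([4.0, 1.0])
loss = lambda w: 0.5 * w @ H @ w
grad = lambda w: H @ w

w = np.array([1.0, 1.0])
for _ in range(200):
    w = sam_step(w, grad)
```

Because the perturbation has a fixed radius `rho`, the iterate settles near (not exactly at) the minimizer, which is the intended trade-off: the solution is chosen for flatness, not just low loss.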
📝 Abstract
Fine-tuning-as-a-service introduces a threat to the safety of Large Language Models when service providers fine-tune their models on poisoned user-submitted datasets, a class of attacks known as harmful fine-tuning attacks. In this work, we show that regularizing the gradient contribution of harmful samples encountered during fine-tuning effectively mitigates the impact of such attacks. To this end, we introduce Antibody, a defense strategy that first establishes robust safety alignment before fine-tuning and then applies a safety-preservation learning algorithm during fine-tuning. Specifically, in the alignment stage, we propose optimizing the model into a flat loss region with respect to harmful samples, which makes the safety alignment more resilient to subsequent harmful fine-tuning. In the fine-tuning stage, we design an algorithm that applies a weighting scheme to all samples in each training batch, inhibiting the model from learning from harmful samples while encouraging learning from benign ones. Experimental results demonstrate that Antibody successfully mitigates harmful fine-tuning attacks while boosting fine-tuning performance on the user-submitted dataset.
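The batch weighting scheme in the fine-tuning stage can be sketched generically: given a per-sample harmfulness score, harmful samples receive weights near zero while benign samples receive weights near one, so the weighted batch loss (and hence its gradient) is dominated by benign data. Both the existence of a scorer and the sigmoid mapping below are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def batch_weights(harm_scores, temperature=1.0):
    """Map per-sample harmfulness scores to training weights in (0, 1):
    a large positive score yields a weight near 0 (suppress the sample),
    a large negative score yields a weight near 1 (learn normally).
    The scorer is assumed to exist; this sigmoid mapping is a sketch,
    not Antibody's actual weighting rule."""
    s = np.asarray(harm_scores, dtype=float)
    return 1.0 / (1.0 + np.exp(s / temperature))

def weighted_batch_loss(per_sample_losses, harm_scores):
    """Weighted mean of per-sample losses: harmful samples barely
    contribute, so their gradients are effectively regularized away."""
    w = batch_weights(harm_scores)
    losses = np.asarray(per_sample_losses, dtype=float)
    return float(np.sum(w * losses) / (np.sum(w) + 1e-12))

# Example batch: two benign samples (negative scores), one harmful (positive).
losses = [0.8, 1.2, 5.0]
scores = [-4.0, -3.0, 6.0]
```

In this example the harmful sample's large loss of 5.0 contributes almost nothing to the weighted batch loss, which stays close to the mean of the two benign losses.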