AI Summary
To address the safety alignment degradation that heterogeneous user data induces in large language models (LLMs) during fine-tuning, this paper proposes a safety-aware post-training defense method. Its core contribution is a novel formulation that explicitly models the fine-tuning parameter delta as an optimizable and compensatable carrier of safety behavior. The method introduces a three-stage synergistic mechanism: safety degradation estimation, sparse delta selection, and generation of a learnable safety compensation vector, enabling decoupled control of safety and utility. Safety risk is assessed via gradient sensitivity analysis, and multi-task safety constraints are enforced through constrained optimization. Evaluated on four heterogeneous fine-tuning datasets, the method achieves zero safety violations while preserving 100% of benign task performance, improves safety consistency by 42%, and reduces utility fluctuation by 76%.
Abstract
Large language models (LLMs) have shown great potential as general-purpose AI assistants across various domains. To fully leverage this potential in specific applications, many companies provide fine-tuning API services, enabling users to upload their own data for LLM customization. However, fine-tuning services introduce a new safety threat: user-uploaded data, whether harmful or benign, can break the model's alignment and lead to unsafe outputs. Moreover, existing defense methods struggle with the diversity of fine-tuning datasets (e.g., varying sizes and tasks), often sacrificing utility for safety or vice versa. To address these issues, we propose Safe Delta, a safety-aware post-training defense method that adjusts the delta parameters (i.e., the change in parameters before and after fine-tuning). Specifically, Safe Delta estimates the safety degradation, selects delta parameters to maximize utility while limiting overall safety loss, and applies a safety compensation vector to mitigate the residual safety loss. Through extensive experiments on four diverse datasets under varying settings, we show that our approach consistently preserves safety while leaving the utility gain from benign datasets unaffected.
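The three stages above can be illustrated with a toy sketch. This is a minimal NumPy illustration, not the paper's implementation: the first-order safety cost `g_s * delta`, the greedy budgeted selection rule, and the gradient-step compensation are all assumptions chosen for clarity; the actual method uses gradient sensitivity analysis and constrained optimization over LLM parameters.

```python
import numpy as np

def select_delta(delta, safety_grad, budget):
    """Stage 1 + 2 (illustrative): estimate per-parameter safety
    degradation with a first-order term, then greedily keep delta
    entries, least harmful first, while the cumulative estimated
    safety loss stays within `budget`."""
    # First-order estimate: contribution of each delta entry to the
    # change in safety loss is safety_grad[i] * delta[i].
    cost = safety_grad * delta
    order = np.argsort(cost)               # least harmful first
    mask = np.zeros_like(delta, dtype=bool)
    total = 0.0
    for i in order:
        harm = max(cost[i], 0.0)           # only positive costs count
        if total + harm > budget:
            continue                       # skip entries over budget
        total += harm
        mask[i] = True
    return delta * mask

def safety_compensation(selected_delta, safety_grad, eta=0.1):
    """Stage 3 (illustrative): a small step against the safety-loss
    gradient on the retained coordinates, standing in for the paper's
    learnable safety compensation vector."""
    return -eta * safety_grad * (selected_delta != 0)

# Toy example: 3-parameter "model".
delta = np.array([1.0, -2.0, 0.5])         # fine-tuning delta
g_safe = np.array([0.1, 0.2, -0.3])        # safety-loss gradient
kept = select_delta(delta, g_safe, budget=0.05)
comp = safety_compensation(kept, g_safe)
# Final adjusted update: kept delta plus compensation vector.
adjusted = kept + comp
```

In this toy run the first coordinate is dropped (its estimated safety cost, 0.1, exceeds the budget of 0.05), while the other two are kept and lightly compensated; the real method applies the analogous logic at LLM scale.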