Fine-Tuning Lowers Safety and Disrupts Evaluation Consistency

📅 2025-06-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Fine-tuning general-purpose large language models (LLMs) systematically degrades their safety alignment—even when training data is entirely benign—revealing a previously uncharacterized “safety degradation attack surface.” Method: The study conducts controlled multi-round fine-tuning, cross-benchmark evaluation (ToxiGen, SafeRLHF), statistical significance testing, and robustness analysis across random seeds and prompt formats. Contribution/Results: Fine-tuning induces average safety performance drops of 12–37%. Notably, safety scores for the same model vary with standard deviation up to ±0.28 across seeds or prompt formulations, exposing fundamental non-reproducibility and high sensitivity of mainstream safety evaluations to experimental details. The work establishes fine-tuning itself as an intrinsic safety risk and identifies evaluation inconsistency as a core bottleneck impeding comparability in LLM safety research. It provides both theoretical grounding and methodological support for developing robust, trustworthy LLM safety assessment frameworks.
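The variance finding above (safety scores for the same model fluctuating by up to ±0.28 across random seeds) can be sketched with a small aggregation routine. This is an illustrative example only, not code from the paper; the function name and the score values are hypothetical.

```python
import statistics

def summarize_safety_scores(scores_by_seed):
    """Aggregate safety scores from repeated evaluation runs.

    scores_by_seed: mapping of random seed -> safety score in [0, 1].
    Returns (mean, sample standard deviation), so the spread across
    seeds can be reported alongside the point estimate.
    """
    scores = list(scores_by_seed.values())
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores) if len(scores) > 1 else 0.0
    return mean, stdev

# Hypothetical scores for one fine-tuned model evaluated under five seeds.
runs = {0: 0.61, 1: 0.34, 2: 0.55, 3: 0.72, 4: 0.41}
mean, stdev = summarize_safety_scores(runs)
print(f"safety score: {mean:.2f} ± {stdev:.2f}")
```

Reporting the mean together with the cross-seed standard deviation, rather than a single-run score, is the kind of practice the paper argues is needed for comparable results.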

📝 Abstract
Fine-tuning a general-purpose large language model (LLM) for a specific domain or task has become a routine procedure for ordinary users. However, fine-tuning is known to remove the safety alignment features of the model, even when the fine-tuning data does not contain any harmful content. We consider this to be a critical failure mode of LLMs due to the widespread uptake of fine-tuning, combined with the benign nature of the "attack". Most well-intentioned developers are likely unaware that they are deploying an LLM with reduced safety. On the other hand, this known vulnerability can be easily exploited by malicious actors intending to bypass safety guardrails. To make any meaningful progress in mitigating this issue, we first need reliable and reproducible safety evaluations. In this work, we investigate how robust a safety benchmark is to trivial variations in the experimental procedure, and to the stochastic nature of LLMs. Our initial experiments expose surprising variance in the results of the safety evaluation, even when seemingly inconsequential changes are made to the fine-tuning setup. Our observations have serious implications for how researchers in this field should report results to enable meaningful comparisons in the future.
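The statistical significance testing mentioned in the summary can be illustrated with a minimal two-sample permutation test comparing base-model and fine-tuned safety scores. This is a generic sketch, not the authors' implementation; the sample scores below are made up.

```python
import random
import statistics

def permutation_test(base, tuned, n_perm=10_000, seed=0):
    """Two-sample permutation test on the difference in mean safety score.

    Returns a p-value for the null hypothesis that base-model and
    fine-tuned-model safety scores come from the same distribution.
    """
    rng = random.Random(seed)
    observed = statistics.mean(base) - statistics.mean(tuned)
    pooled = list(base) + list(tuned)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = (statistics.mean(pooled[:len(base)])
                - statistics.mean(pooled[len(base):]))
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm

# Hypothetical per-run safety scores before and after benign fine-tuning.
base_scores = [0.90, 0.91, 0.92, 0.88, 0.90]
tuned_scores = [0.50, 0.52, 0.48, 0.51, 0.49]
print(f"p-value: {permutation_test(base_scores, tuned_scores):.4f}")
```

A permutation test makes no normality assumption, which suits the small, high-variance score samples the paper describes.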
Problem

Research questions and friction points this paper is trying to address.

Fine-tuning removes LLM safety alignment features
Safety benchmarks lack robustness to trivial variations
Unreliable safety evaluations hinder meaningful comparisons
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled multi-round fine-tuning with cross-benchmark safety evaluation (ToxiGen, SafeRLHF)
Robustness analysis across random seeds and prompt formats, with significance testing
Quantifies evaluation variance (std up to ±0.28) and motivates reporting standards for comparability