🤖 AI Summary
This work reveals that fine-tuning large language models (LLMs) solely on ostensibly harmless datasets can still severely degrade model safety. The root cause is identified as a small fraction of “high-risk benign samples”—semantically innocuous yet inherently unsafe instances—present in such datasets, which undermine safety alignment.
Method: We propose Self-Inf-N, an architecture-agnostic, self-supervised anomaly detection framework that jointly leverages information entropy and gradient sensitivity to identify these high-risk samples.
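To make the scoring idea concrete, here is a minimal sketch of combining the two signals. This is an illustration only: the function names, the z-score normalization, and the additive combination are assumptions for exposition, not the paper's actual Self-Inf-N scoring rule, and the per-sample entropies and gradient norms are taken as precomputed inputs.

```python
import numpy as np

def combined_anomaly_scores(entropies, grad_norms):
    """Hypothetical anomaly score: z-normalize each signal and sum.

    entropies:  per-sample output-entropy values (higher = more uncertain)
    grad_norms: per-sample gradient-sensitivity values (higher = more influential)
    """
    e = (entropies - entropies.mean()) / (entropies.std() + 1e-8)
    g = (grad_norms - grad_norms.mean()) / (grad_norms.std() + 1e-8)
    return e + g

def select_outliers(entropies, grad_norms, n=100):
    """Return indices of the n highest-scoring (most anomalous) samples."""
    scores = combined_anomaly_scores(np.asarray(entropies, dtype=float),
                                     np.asarray(grad_norms, dtype=float))
    return np.argsort(scores)[::-1][:n]

# Toy usage on synthetic per-sample statistics.
rng = np.random.default_rng(0)
ent = rng.random(1000)
gn = rng.random(1000)
idx = select_outliers(ent, gn, n=100)  # indices of 100 candidate high-risk samples
```

The selected subset would then serve as the fine-tuning set in the attack described above.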
Contribution/Results: Empirical evaluation shows that fine-tuning on only 100 detected anomalous samples significantly increases harmful-output rates across seven mainstream LLMs while evading most existing alignment safeguards, and the attack exhibits strong cross-model transferability. This work introduces an anomaly-detection perspective on LLM safety risk, establishing a new paradigm for data quality assessment and robust alignment.
📝 Abstract
Recent studies have uncovered a troubling vulnerability in the fine-tuning stage of large language models (LLMs): even fine-tuning on entirely benign datasets can lead to a significant increase in the harmfulness of LLM outputs. Building on this finding, our red-teaming study takes this threat one step further by developing a more effective attack. Specifically, we analyze and identify the samples within benign datasets that contribute most to safety degradation, then fine-tune LLMs exclusively on these samples. We approach this problem from an outlier detection perspective and propose Self-Inf-N to detect and extract outliers for fine-tuning. Our findings reveal that fine-tuning LLMs on 100 outlier samples selected by Self-Inf-N from benign datasets severely compromises LLM safety alignment. Extensive experiments across seven mainstream LLMs demonstrate that our attack exhibits high transferability across different architectures and remains effective in practical scenarios. Alarmingly, our results indicate that most existing mitigation strategies fail to defend against this attack, underscoring the urgent need for more robust alignment safeguards. Code is available at https://github.com/GuanZihan/Benign-Samples-Matter.