🤖 AI Summary
This work addresses the vulnerability of large language models (LLMs) to jailbreaking attacks and their propensity to generate harmful content. We propose an efficient, low-overhead, controllable jailbreaking method. Our core innovation is the first identification of lower layers whose parameters exhibit heightened sensitivity to harmful outputs, achieved via layer-wise parameter statistics and a composite sensitivity scoring mechanism. Based on this insight, we introduce Selective Supervised Fine-Tuning (Selective SFT), which fine-tunes only the most sensitive lower layers while freezing all others. Experiments demonstrate that our approach reduces training time and GPU memory consumption by approximately 60% compared to full-layer LoRA, while maintaining high jailbreak success rates and harm scores. The method exhibits strong generalizability across multiple open-source LLMs and consistently outperforms state-of-the-art jailbreaking techniques.
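The selective fine-tuning idea above can be sketched in a few lines of PyTorch: freeze every block except the k lowest ones so that only their parameters receive gradients during SFT. This is a minimal illustration on a toy model; the real layer container names (e.g. `model.layers[i]` in a Hugging Face decoder) and the choice of k are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

# Toy stand-in for a decoder-only LLM: a stack of per-layer modules.
# A real LLM would expose its transformer blocks similarly (name assumed).
class TinyLM(nn.Module):
    def __init__(self, n_layers=8, d=16):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(d, d) for _ in range(n_layers)])

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x

def freeze_all_but_bottom(model, k):
    """Keep only the k lowest layers (closest to the input) trainable."""
    for i, layer in enumerate(model.layers):
        trainable = i < k
        for p in layer.parameters():
            p.requires_grad = trainable

model = TinyLM()
freeze_all_but_bottom(model, k=2)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable fraction: {trainable / total:.2f}")  # → trainable fraction: 0.25
```

An optimizer built from `filter(lambda p: p.requires_grad, model.parameters())` then updates only the bottom layers, which is where the reported time and memory savings come from: frozen layers need no gradient or optimizer state.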
📝 Abstract
With the widespread application of Large Language Models across various domains, their security issues have increasingly garnered attention from both the academic and industrial communities. This study samples and normalizes the parameters of an LLM to generate visualizations and heatmaps of parameter distributions, revealing notable discrepancies among certain hidden layers. Further analysis computes statistical metrics for each layer and combines them into a Comprehensive Sensitivity Score, which identifies the lower layers as particularly sensitive to the generation of harmful content. Based on this finding, we employ a layer-freezing training strategy, selectively performing Supervised Fine-Tuning only on the lower layers. Experimental results demonstrate that this method significantly reduces training time and GPU memory consumption while maintaining a high jailbreak success rate and a high harm score, outperforming the results achieved by applying the LoRA method for SFT across all layers. The method also extends successfully to other open-source large models, validating its generality and effectiveness across different model architectures. Furthermore, we compare our method with other jailbreak methods, demonstrating the superior performance of our approach. By innovatively proposing a method to statistically analyze and compare large model parameters layer by layer, this study provides new insights into the interpretability of large models. These findings emphasize the necessity of continuous research and adaptive security measures in the rapidly evolving field of LLMs to prevent potential jailbreak attack risks, thereby promoting the development of more robust and secure LLMs.
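The layer-wise analysis described above can be sketched as follows: compute a few simple statistics per layer, min-max normalize each statistic across layers, and combine them with weights into one composite score per layer. The specific statistics (|mean|, std, max|·|) and the weights are illustrative assumptions; the paper's exact metrics and weighting for the Comprehensive Sensitivity Score are not specified in this abstract.

```python
import torch

def layer_stats(layer_params):
    """Flatten a layer's parameters and compute simple summary statistics.
    The choice of statistics here (|mean|, std, max|w|) is an assumption."""
    flat = torch.cat([p.detach().flatten() for p in layer_params])
    return torch.stack([flat.mean().abs(), flat.std(), flat.abs().max()])

def sensitivity_scores(per_layer_params, weights=(0.3, 0.4, 0.3)):
    """Min-max normalize each statistic across layers, then combine the
    normalized columns with (assumed) weights into one score per layer."""
    stats = torch.stack([layer_stats(p) for p in per_layer_params])  # [L, 3]
    lo, hi = stats.min(dim=0).values, stats.max(dim=0).values
    norm = (stats - lo) / (hi - lo + 1e-12)                          # each column in [0, 1]
    w = torch.tensor(weights)
    return norm @ w                                                  # composite score per layer

# Toy example: six random "layers"; the first is given larger variance,
# standing in for a layer whose parameter distribution is an outlier.
torch.manual_seed(0)
layers = [[torch.randn(32, 32) * (2.0 if i == 0 else 1.0)] for i in range(6)]
scores = sensitivity_scores(layers)
print(scores)  # the outlier layer receives the highest composite score
```

Ranking layers by this score and fine-tuning only the top-ranked (lowest) ones is the selection step that the freezing strategy then acts on.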