🤖 AI Summary
To counter jailbreaking attacks induced by malicious samples during large language model (LLM) customization, this paper proposes an end-to-end defense framework centered on adaptive data curation, spanning pre-customization immunization, in-customization suppression, and post-customization repair. Methodologically, it establishes data curation itself as the defense backbone, requiring no auxiliary modules, and combines adversarial-aware data reweighting, in-process risk neutralization, and post-hoc behavioral restoration. Evaluated across multiple benchmarks, the framework achieves up to a 100% rate of safe response generation, substantially improving the robustness and controllability of customized LLMs. Its principal contribution is establishing data curation as a security paradigm for LLM customization and introducing a holistic, lifecycle-coordinated defense strategy against jailbreaking.
📝 Abstract
Large language models (LLMs) are widely adapted for downstream applications through fine-tuning, a process known as customization. However, recent studies have identified a vulnerability in this process: malicious samples can compromise the robustness of LLMs and amplify harmful behaviors, an attack commonly referred to as jailbreaking. To address this challenge, we propose an adaptive data curation approach that allows any text to be curated to enhance its effectiveness in counteracting harmful samples during customization. To avoid the need for additional defensive modules, we further introduce a comprehensive mitigation framework spanning the lifecycle of the customization process: before customization, to immunize LLMs against future jailbreak attempts; during customization, to neutralize risks; and after customization, to restore compromised models. Experimental results demonstrate a significant reduction in jailbreaking effects, achieving up to a 100% success rate in generating safe responses. By combining adaptive data curation with lifecycle-based mitigation strategies, this work represents a solid step forward in mitigating jailbreaking risks and ensuring the secure adaptation of LLMs.
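The abstract describes a three-stage lifecycle (immunize before, neutralize during, repair after) built around reweighting curated data. The paper's actual algorithms are not given here, so the following is only a minimal Python sketch of how such a pipeline might be organized; every name (`Sample`, `curate`, `lifecycle_defense`, `risk_score`, `finetune`, `repair`) is a hypothetical placeholder, not the authors' API.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical training sample: text plus a weight used in the fine-tuning loss.
@dataclass
class Sample:
    text: str
    weight: float = 1.0

def curate(samples: List[Sample],
           risk_score: Callable[[str], float]) -> List[Sample]:
    """Adaptive data curation (sketch): down-weight samples that a risk
    scorer flags, so they contribute less during customization."""
    return [Sample(s.text, s.weight * (1.0 - risk_score(s.text)))
            for s in samples]

def lifecycle_defense(train_data: List[Sample],
                      risk_score: Callable[[str], float],
                      finetune: Callable[[List[Sample]], object],
                      repair: Callable[[object], object]) -> object:
    # Before customization: curate the data to immunize the model.
    immunized = curate(train_data, risk_score)
    # During customization: fine-tune on the reweighted mixture,
    # suppressing the influence of harmful samples.
    model = finetune(immunized)
    # After customization: restore safe behavior in the resulting model.
    return repair(model)
```

The design choice illustrated is that the defense lives entirely in the data path (the `curate` step and its reweighting), so no extra guard module has to ship with the fine-tuned model.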