๐ค AI Summary
Despite safety alignment, large language models (LLMs) remain vulnerable to adversarial instruction-based jailbreaking attacks, revealing an inherent trade-off between security and task performance. This paper proposes Intent-FT, a lightweight, intent-aware safety fine-tuning method that introduces explicit intent modeling into LLM security training for the first timeโenabling models to recognize and generalize across previously unseen attack intents. Intent-FT supports cross-model intent knowledge transfer and establishes a multi-dimensional evaluation framework covering both parametric and non-parametric attack settings. Experiments demonstrate that Intent-FT reduces success rates of mainstream jailbreaking attacks to below 50%, significantly outperforming existing defenses. Moreover, it substantially lowers false rejection rates on benign instructions, achieving both strong robustness and high practical utility.
๐ Abstract
Despite extensive safety-tuning, large language models (LLMs) remain vulnerable to jailbreak attacks via adversarially crafted instructions, reflecting a persistent trade-off between safety and task performance. In this work, we propose Intent-FT, a simple and lightweight fine-tuning approach that explicitly trains LLMs to infer the underlying intent of an instruction before responding. By fine-tuning on a targeted set of adversarial instructions, Intent-FT enables LLMs to generalize intent deduction to unseen attacks, thereby substantially improving their robustness. We comprehensively evaluate both parametric and non-parametric attacks across open-source and proprietary models, considering harmfulness from attacks, utility, over-refusal, and impact against white-box threats. Empirically, Intent-FT consistently mitigates all evaluated attack categories, with no single attack exceeding a 50% success rate -- whereas existing defenses remain only partially effective. Importantly, our method preserves the model's general capabilities and reduces excessive refusals on benign instructions containing superficially harmful keywords. Furthermore, models trained with Intent-FT accurately identify hidden harmful intent in adversarial attacks, and these learned intentions can be effectively transferred to enhance vanilla model defenses.