Mitigating Jailbreaks with Intent-Aware LLMs

๐Ÿ“… 2025-08-16
๐Ÿ“ˆ Citations: 0
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Despite safety alignment, large language models (LLMs) remain vulnerable to adversarial instruction-based jailbreaking attacks, revealing an inherent trade-off between security and task performance. This paper proposes Intent-FT, a lightweight, intent-aware safety fine-tuning method that introduces explicit intent modeling into LLM security training for the first timeโ€”enabling models to recognize and generalize across previously unseen attack intents. Intent-FT supports cross-model intent knowledge transfer and establishes a multi-dimensional evaluation framework covering both parametric and non-parametric attack settings. Experiments demonstrate that Intent-FT reduces success rates of mainstream jailbreaking attacks to below 50%, significantly outperforming existing defenses. Moreover, it substantially lowers false rejection rates on benign instructions, achieving both strong robustness and high practical utility.

Technology Category

Application Category

๐Ÿ“ Abstract
Despite extensive safety-tuning, large language models (LLMs) remain vulnerable to jailbreak attacks via adversarially crafted instructions, reflecting a persistent trade-off between safety and task performance. In this work, we propose Intent-FT, a simple and lightweight fine-tuning approach that explicitly trains LLMs to infer the underlying intent of an instruction before responding. By fine-tuning on a targeted set of adversarial instructions, Intent-FT enables LLMs to generalize intent deduction to unseen attacks, thereby substantially improving their robustness. We comprehensively evaluate both parametric and non-parametric attacks across open-source and proprietary models, considering harmfulness from attacks, utility, over-refusal, and impact against white-box threats. Empirically, Intent-FT consistently mitigates all evaluated attack categories, with no single attack exceeding a 50% success rate -- whereas existing defenses remain only partially effective. Importantly, our method preserves the model's general capabilities and reduces excessive refusals on benign instructions containing superficially harmful keywords. Furthermore, models trained with Intent-FT accurately identify hidden harmful intent in adversarial attacks, and these learned intentions can be effectively transferred to enhance vanilla model defenses.
Problem

Research questions and friction points this paper is trying to address.

Mitigating jailbreak attacks on LLMs
Improving intent inference for adversarial instructions
Balancing safety and performance in language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes LLMs for intent inference
Generalizes intent deduction to unseen attacks
Preserves model capabilities and reduces refusals
๐Ÿ”Ž Similar Papers
No similar papers found.