🤖 AI Summary
Large language models (LLMs) are vulnerable to jailbreak attacks via adversarial context injection, posing significant safety and ethical risks. To address this, we propose a fine-tuning-free, plug-and-play input preprocessing mechanism that jointly performs intent recognition and contextual credibility assessment, integrated with adversarial example detection. This enables precise filtering of malicious context and robust identification of users' genuine intent. The mechanism is model-agnostic, compatible with both white-box and black-box LLMs, and improves safety while preserving helpfulness. Extensive experiments across six representative jailbreak attack categories show that our method reduces attack success rates by up to 88%. Moreover, it achieves a state-of-the-art trade-off between safety and helpfulness, as measured by the composite Safety and Helpfulness Product metric.
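Since the mechanism is an input pre-processing step rather than a change to the target model, it can sit in front of any LLM endpoint. The sketch below shows one way such a wrapper could be structured; `FilterResult`, `filter_context`, and the pass-through placeholder are illustrative assumptions for this summary, not the paper's released interface.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class FilterResult:
    primary_prompt: str   # prompt judged to carry the user's genuine intent
    trusted_context: str  # context that passed the credibility assessment
    is_adversarial: bool  # flagged by adversarial example detection


def filter_context(user_input: str) -> FilterResult:
    """Placeholder for the Context Filtering model. In the real system this
    would run the learned filter, which jointly performs intent recognition,
    contextual credibility assessment, and adversarial example detection.
    Here it simply passes the input through unchanged."""
    return FilterResult(primary_prompt=user_input,
                        trusted_context="",
                        is_adversarial=False)


def safe_generate(user_input: str, target_llm: Callable[[str], str]) -> str:
    """Wrap any LLM behind the filter. The target model needs no
    fine-tuning; only its input is pre-processed."""
    result = filter_context(user_input)
    if result.is_adversarial:
        return "Sorry, I can't help with that request."
    # Forward only the trusted context plus the primary prompt to the LLM.
    cleaned = f"{result.trusted_context}\n{result.primary_prompt}".strip()
    return target_llm(cleaned)


# Usage with any callable LLM endpoint, e.g. a black-box API wrapper:
# answer = safe_generate(raw_user_input, target_llm=my_api_call)
```

Because the wrapper only touches the input, the same code path serves hosted black-box APIs and local white-box models alike.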
📝 Abstract
While Large Language Models (LLMs) have shown significant advances in performance, a variety of jailbreak attacks pose growing safety and ethical risks. Malicious users often exploit adversarial context to deceive LLMs into generating responses to harmful queries. In this study, we propose a new defense mechanism called the Context Filtering model, an input pre-processing method that filters out untrustworthy and unreliable context and identifies the primary prompts carrying the user's real intent, thereby uncovering concealed malicious intent. Because enhancing the safety of LLMs often compromises their helpfulness and degrades the experience of benign users, our method aims to improve safety while preserving original performance. We evaluate the effectiveness of our model against six different jailbreak attacks, comparing it with state-of-the-art defense mechanisms and assessing the helpfulness of LLMs under each defense. Our model reduces the Attack Success Rates of jailbreak attacks by up to 88% while maintaining the original LLMs' performance, achieving state-of-the-art Safety and Helpfulness Product results. Notably, our model is a plug-and-play method applicable to all LLMs, both white-box and black-box, enhancing their safety without any fine-tuning of the models themselves. We will make our model publicly available for research purposes.
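The abstract reports the Safety and Helpfulness Product without spelling out its formula here. One natural reading, under the assumption that safety is the complement of the Attack Success Rate (ASR) and helpfulness is a utility score normalized to [0, 1], is sketched below; this formulation is an assumption, not taken from the paper.

```python
def safety_helpfulness_product(asr: float, helpfulness: float) -> float:
    """Assumed formulation of the composite metric: safety is taken as
    1 - ASR, helpfulness as a utility score normalized to [0, 1], and the
    two are multiplied so that a defense scores well only when it is
    simultaneously safe and helpful."""
    if not (0.0 <= asr <= 1.0 and 0.0 <= helpfulness <= 1.0):
        raise ValueError("asr and helpfulness must lie in [0, 1]")
    return (1.0 - asr) * helpfulness
```

A product of this kind penalizes degenerate defenses: a filter that refuses everything drives ASR to zero but also drives helpfulness, and hence the product, toward zero.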