🤖 AI Summary
This study investigates how few-shot exemplars affect the effectiveness of prompt-based defenses against jailbreak attacks on large language models, focusing on their interaction with role-oriented and task-oriented prompting strategies. Through a systematic evaluation across four safety benchmarks—AdvBench, HarmBench, SG-Bench, and XSTest—using six jailbreak attack methods on multiple mainstream LLMs, the work reveals for the first time that few-shot exemplars exert opposing effects on the two prompting paradigms: they improve safety rates by up to 4.5% under role-oriented prompts but degrade defense efficacy by as much as 21.2% under task-oriented prompts. Based on these findings, the paper offers practical recommendations for optimizing prompt-based defenses in real-world deployments.
📝 Abstract
Large Language Models (LLMs) face increasing threats from jailbreak attacks that bypass safety alignment. While prompt-based defenses such as Role-Oriented Prompts (RoP) and Task-Oriented Prompts (ToP) have shown effectiveness, the role of few-shot demonstrations in these defense strategies remains unclear. Prior work suggests that few-shot examples may compromise safety, but has not investigated how few-shot demonstrations interact with different system-prompt strategies. In this paper, we conduct a comprehensive evaluation of multiple mainstream LLMs across four safety benchmarks (AdvBench, HarmBench, SG-Bench, XSTest) using six jailbreak attack methods. Our key finding is that few-shot demonstrations produce opposite effects on RoP and ToP: they enhance RoP's safety rate by up to 4.5% by reinforcing role identity, while degrading ToP's effectiveness by up to 21.2% by distracting attention from task instructions. Based on these findings, we provide practical recommendations for deploying prompt-based defenses in real-world LLM applications.