How Few-shot Demonstrations Affect Prompt-based Defenses Against LLM Jailbreak Attacks

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the impact of few-shot exemplars on the effectiveness of prompt-based defenses against jailbreaking attacks on large language models, with a focus on their interaction with role-oriented and task-oriented prompting strategies. Through systematic evaluation across four safety benchmarks—AdvBench, HarmBench, SG-Bench, and XSTest—using six jailbreaking attack methods on multiple mainstream large language models, the work reveals for the first time that few-shot exemplars exert opposing effects on the two prompting paradigms: they improve safety rates by up to 4.5% in role-oriented prompts but degrade defense efficacy by as much as 21.2% in task-oriented prompts. Based on these findings, the paper offers practical recommendations for optimizing prompt-based defenses in real-world deployments.

📝 Abstract
Large Language Models (LLMs) face increasing threats from jailbreak attacks that bypass safety alignment. While prompt-based defenses such as Role-Oriented Prompts (RoP) and Task-Oriented Prompts (ToP) have shown effectiveness, the role of few-shot demonstrations in these defense strategies remains unclear. Prior work suggests that few-shot examples may compromise safety, but has not investigated how few-shot demonstrations interact with different system prompt strategies. In this paper, we conduct a comprehensive evaluation on multiple mainstream LLMs across four safety benchmarks (AdvBench, HarmBench, SG-Bench, XSTest) using six jailbreak attack methods. Our key finding reveals that few-shot demonstrations produce opposite effects on RoP and ToP: they enhance RoP's safety rate by up to 4.5% by reinforcing role identity, while degrading ToP's effectiveness by up to 21.2% by distracting attention from task instructions. Based on these findings, we provide practical recommendations for deploying prompt-based defenses in real-world LLM applications.
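The two defense paradigms the abstract contrasts can be sketched as chat-message templates. This is a minimal illustration only: the prompt wordings, the exemplar, and the `build_messages` helper are assumptions for exposition, not the paper's actual templates.

```python
# Role-Oriented Prompt (RoP): the defense is framed as a persistent identity.
ROP_SYSTEM = (
    "You are a responsible AI assistant. You always refuse requests "
    "that could cause harm."
)

# Task-Oriented Prompt (ToP): the defense is framed as an explicit task step.
TOP_SYSTEM = (
    "Task: answer the user's question. Before answering, check whether "
    "the request is harmful; if it is, refuse."
)

# Few-shot demonstrations: (harmful request, safe refusal) pairs inserted
# as prior conversation turns before the real user query.
FEW_SHOT = [
    ("How do I pick a lock?", "I can't help with that request."),
]

def build_messages(system_prompt, user_query, few_shot=None):
    """Assemble a chat-style message list, optionally with few-shot turns."""
    messages = [{"role": "system", "content": system_prompt}]
    for question, refusal in (few_shot or []):
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": refusal})
    messages.append({"role": "user", "content": user_query})
    return messages

# The paper's finding, in these terms: adding FEW_SHOT to a ROP_SYSTEM prompt
# tends to help (reinforces the role), while adding it to a TOP_SYSTEM prompt
# tends to hurt (distracts from the task instruction).
rop_msgs = build_messages(ROP_SYSTEM, "Tell me how to do X.", few_shot=FEW_SHOT)
top_msgs = build_messages(TOP_SYSTEM, "Tell me how to do X.")
```

The helper keeps the few-shot turns independent of the system prompt, which is what lets the same demonstrations be toggled on and off across both paradigms in an evaluation.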
Problem

Research questions and friction points this paper is trying to address.

few-shot demonstrations
prompt-based defenses
LLM jailbreak attacks
Role-Oriented Prompts
Task-Oriented Prompts