Exposing Long-Tail Safety Failures in Large Language Models through Efficient Diverse Response Sampling

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing safety alignment methods merely suppress, rather than eliminate, unsafe behaviors in large language models, leaving critical vulnerabilities in the long-tail distribution of responses. To address this, the paper proposes Progressive Diverse Population Sampling (PDPS), a method that systematically uncovers long-tail safety failures by exploring the model's output space: for a fixed high-risk prompt, it generates a compact yet semantically diverse subset of responses. PDPS combines stochastic token-level sampling with a diversity-aware selection mechanism, achieving attack success rates comparable to large-scale i.i.d. sampling at only 8%–29% of the computational cost across multiple jailbreak benchmarks and open-source large language models. Under constrained response budgets, PDPS further improves attack success rates by 26%–40%.

📝 Abstract
Safety tuning through supervised fine-tuning and reinforcement learning from human feedback has substantially improved the robustness of large language models (LLMs). However, it often suppresses rather than eliminates unsafe behaviors, leaving rare but critical failures hidden in the long tail of the output distribution. While most red-teaming work emphasizes adversarial prompt search (input-space optimization), we show that safety failures can also be systematically exposed through diverse response generation (output-space exploration) for a fixed safety-critical prompt, where increasing the number and diversity of sampled responses can drive jailbreak success rates close to unity. To efficiently uncover such failures, we propose Progressive Diverse Population Sampling (PDPS), which combines stochastic token-level sampling with diversity-aware selection to explore a large candidate pool of responses and retain a compact, semantically diverse subset. Across multiple jailbreak benchmarks and open-source LLMs, PDPS achieves attack success rates comparable to large-scale IID sampling while using only 8% to 29% of the computational cost. Under limited-response settings, it improves success rates by 26% to 40% over IID sampling and Diverse Beam Search. Furthermore, responses generated by PDPS exhibit both a higher number and greater diversity of unsafe outputs, demonstrating its effectiveness in uncovering a broader range of failures.
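The abstract describes PDPS as stochastic token-level sampling followed by diversity-aware selection that retains a compact, semantically diverse subset from a large candidate pool. One plausible realization of such a selection step (an illustrative assumption, not the paper's actual algorithm) is greedy farthest-point selection over response embeddings, which repeatedly keeps the candidate farthest from everything selected so far:

```python
# Hypothetical sketch of a diversity-aware selection step, in the spirit of
# PDPS's "retain a compact, semantically diverse subset" (the paper's actual
# mechanism may differ). Greedy max-min (farthest-point) selection over a
# pool of response embeddings.
import numpy as np

def select_diverse_subset(embeddings: np.ndarray, k: int) -> list[int]:
    """Greedily pick k indices whose embeddings are maximally spread out."""
    n = embeddings.shape[0]
    # Normalize rows so Euclidean distance tracks cosine dissimilarity.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    selected = [0]  # seed with the first candidate
    # min_dist[i] = distance from candidate i to its nearest selected member
    min_dist = np.linalg.norm(emb - emb[0], axis=1)
    while len(selected) < min(k, n):
        nxt = int(np.argmax(min_dist))  # farthest from the current subset
        selected.append(nxt)
        min_dist = np.minimum(min_dist, np.linalg.norm(emb - emb[nxt], axis=1))
    return selected

# Toy pool: 6 candidate "response embeddings" in 2-D; three near-duplicate
# clusters, so a diverse subset of 3 should pick one member per cluster.
pool = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0],
                 [0.1, 0.95], [-1.0, 0.0], [0.7, 0.7]])
print(select_diverse_subset(pool, 3))  # → [0, 4, 2]
```

Near-duplicate responses (indices 1, 3, 5) are skipped, which matches the paper's motivation: each retained response covers a distinct region of the output space, so the same jailbreak-discovery budget probes more of the long tail.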
Problem

Research questions and friction points this paper is trying to address.

long-tail safety failures
large language models
diverse response sampling
jailbreak
safety tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

diverse response sampling
long-tail safety failures
output-space exploration
Progressive Diverse Population Sampling
jailbreak detection
Suvadeep Hajra
Department of Electrical Engineering, Indian Institute Of Technology Delhi, India
Palash Nandi
Department of Electrical Engineering, Indian Institute Of Technology Delhi, India
Tanmoy Chakraborty
Associate Professor, IIT Delhi, India
Natural Language Processing · Large Language Models · Social Computing