🤖 AI Summary
Existing LLM safety evaluations suffer from a trade-off between attack diversity and intent clarity: static templates lack diversity, while dynamic templates compromise interpretability and reproducibility—hindering reliable red-teaming and defense assessment.
Method: We propose Embedded Jailbreak Templates—a novel approach that naturally integrates harmful queries into fixed structural scaffolds, preserving both semantic intent and adversarial diversity. We further introduce context-aware fusion and progressive prompt engineering to enhance template authenticity and controllability, alongside a standardized generation and evaluation protocol to ensure quality and consistency.
Contribution/Results: Experiments demonstrate that our method significantly improves the realism and reproducibility of jailbreak benchmarks, enabling more effective red-team testing and robust regression validation of defensive strategies. The framework advances systematic, scalable, and scientifically rigorous LLM safety evaluation.
📝 Abstract
As the use of large language models (LLMs) continues to expand, ensuring their safety and robustness has become a critical challenge. In particular, jailbreak attacks that bypass built-in safety mechanisms are increasingly recognized as a tangible threat across industries, driving the need for diverse templates to support red-teaming efforts and strengthen defensive techniques. However, current approaches predominantly rely on two limited strategies: (i) substituting harmful queries into fixed templates, and (ii) having the LLM generate entire templates, which often compromises intent clarity and reproducibility. To address this gap, this paper introduces the Embedded Jailbreak Template, which preserves the structure of existing templates while naturally embedding harmful queries within their context. We further propose a progressive prompt-engineering methodology to ensure template quality and consistency, alongside standardized protocols for generation and evaluation. Together, these contributions provide a benchmark that more accurately reflects real-world usage scenarios and harmful intent, facilitating its application in red-teaming and policy regression testing.