🤖 AI Summary
Existing red-teaming methods for large language model (LLM) safety evaluation struggle with targeted vulnerability discovery and offer little control over prompt perturbations. Method: This paper proposes DART, a proximity-constrained red-teaming framework leveraging text diffusion. DART is the first to introduce diffusion-based semantic perturbation in embedding space, applying controllable, gradient-free modifications to reference prompts while preserving topical and stylistic proximity. It formulates the attack as a black-box optimization problem, circumventing the inefficiencies inherent in autoregressive constrained optimization. Contribution/Results: Experiments demonstrate that DART significantly outperforms fine-tuning and zero-/few-shot prompting baselines across multiple benchmarks. It efficiently discovers highly harmful inputs in close proximity to given references, validating the effectiveness, controllability, and practicality of proximity-constrained red-teaming for LLM safety assessment.
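To make the mechanism concrete, here is a minimal, toy sketch of diffusion-style perturbation in embedding space: a reference prompt is embedded, Gaussian noise scaled by a `noise_scale` knob is added (controlling proximity to the reference), and the noisy vectors are projected back to the nearest vocabulary tokens. All names (`VOCAB`, `EMB`, `perturb`) and the tiny random embedding table are illustrative assumptions, not DART's actual implementation, which operates on a real LLM's embedding space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary with random embeddings (a stand-in for an LLM's embedding table).
VOCAB = ["how", "do", "i", "make", "a", "cake", "story", "poem", "plan"]
EMB = rng.normal(size=(len(VOCAB), 8))

def embed(tokens):
    """Look up the embedding vector for each token."""
    return np.stack([EMB[VOCAB.index(t)] for t in tokens])

def nearest_tokens(vectors):
    """Project each perturbed vector back to its closest vocabulary token."""
    dists = np.linalg.norm(vectors[:, None, :] - EMB[None, :, :], axis=-1)
    return [VOCAB[i] for i in dists.argmin(axis=1)]

def perturb(tokens, noise_scale, rng):
    """Diffusion-style, gradient-free perturbation: add Gaussian noise in
    embedding space, then round back to tokens. noise_scale directly bounds
    how far the candidate drifts from the reference prompt."""
    e = embed(tokens)
    noisy = e + noise_scale * rng.normal(size=e.shape)
    return nearest_tokens(noisy)

reference = ["how", "do", "i", "make", "a", "cake"]
candidate = perturb(reference, noise_scale=0.3, rng=rng)
```

In the black-box setting, many such candidates would be sampled and scored by querying the target LLM, keeping the most harmful prompt found within the proximity budget; no gradients through the target model are needed, only the `noise_scale` knob and the scoring queries.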
📝 Abstract
Recent work has proposed automated red-teaming methods for testing the vulnerabilities of a given target large language model (LLM). These methods use red-teaming LLMs to uncover inputs that induce harmful behavior in a target LLM. In this paper, we study red-teaming strategies that enable a targeted security assessment. We propose an optimization framework for red-teaming with proximity constraints, where the discovered prompts must be similar to reference prompts from a given dataset. This dataset serves as a template for the discovered prompts, anchoring the search for test cases to specific topics, writing styles, or types of harmful behavior. We show that established autoregressive model architectures do not perform well in this setting. We therefore introduce a black-box red-teaming method inspired by text-diffusion models: Diffusion for Auditing and Red-Teaming (DART). DART modifies the reference prompt by perturbing it in the embedding space, directly controlling the amount of change introduced. We systematically evaluate our method by comparing its effectiveness with established methods based on model fine-tuning and zero- and few-shot prompting. Our results show that DART is significantly more effective at discovering harmful inputs in close proximity to the reference prompt.