🤖 AI Summary
Addressing the challenge of balancing attack diversity and effectiveness in large language model (LLM) automated red-teaming, this paper proposes a target-driven, two-stage, multi-step reinforcement learning framework. First, an LLM autonomously generates diverse adversarial objectives; second, a PPO variant jointly optimizes dual-path adversarial prompts—prompt injection and harmful response elicitation. We introduce two novel reward mechanisms: Rule-Based Reward (RBR), which encodes safety-critical constraints, and Historical Diversity Reward, which penalizes redundancy against prior attacks—enabling, for the first time, simultaneous optimization of scale, diversity, and success rate. Experiments demonstrate statistically significant improvements over baselines across two evaluation tasks: attack success rate increases markedly, Jaccard similarity decreases by 62%, diversity improves by over 2.3×, and thousands of high-quality, low-overlap adversarial prompts are generated.
📝 Abstract
Automated red teaming can discover rare model failures and generate challenging examples that can be used for training or evaluation. However, a core challenge in automated red teaming is ensuring that the attacks are both diverse and effective. Prior methods typically succeed in optimizing either for diversity or for effectiveness, but rarely both. In this paper, we provide methods that enable automated red teaming to generate a large number of diverse and successful attacks. Our approach decomposes the task into two steps: (1) automated methods for generating diverse attack goals and (2) generating effective attacks for those goals. While we provide multiple straightforward methods for generating diverse goals, our key contributions are to train an RL attacker that both follows those goals and generates diverse attacks for those goals. First, we demonstrate that it is easy to use a large language model (LLM) to generate diverse attacker goals with per-goal prompts and rewards, including rule-based rewards (RBRs) to grade whether the attacks are successful for the particular goal. Second, we demonstrate how training the attacker model with multi-step RL, where the model is rewarded for generating attacks that are different from past attempts further increases diversity while remaining effective. We use our approach to generate both prompt injection attacks and prompts that elicit unsafe responses. In both cases, we find that our approach is able to generate highly-effective and considerably more diverse attacks than past general red-teaming approaches.