Learning diverse attacks on large language models for robust red-teaming and safety tuning

📅 2024-05-28
🏛️ arXiv.org
📈 Citations: 9
Influential: 0
🤖 AI Summary
This work addresses the limited diversity and mode collapse prevalent in adversarial prompt generation for red-teaming large language models (LLMs). The authors propose the first fine-grained probabilistic generation framework for attack prompts based on Generative Flow Networks (GFlowNets). The method mitigates mode collapse via smoothed training and jointly optimizes attack diversity and success rate by integrating toxicity-classifier guidance and cross-model transfer training. The generated prompts exhibit strong generalization and transferability across diverse target LLMs, including both safety-fine-tuned and base models. When used to construct a red-teaming dataset, the prompts significantly improve the robustness of safety-fine-tuned models against other reinforcement-learning-based red-teaming attacks (a 37% reduction in attack success). This work establishes a scalable, high-coverage paradigm for automated, probabilistic adversarial prompt generation, advancing systematic and rigorous safety evaluation of LLMs.

📝 Abstract
Red-teaming, or identifying prompts that elicit harmful responses, is a critical step in ensuring the safe and responsible deployment of large language models (LLMs). Developing effective protection against many modes of attack prompts requires discovering diverse attacks. Automated red-teaming typically uses reinforcement learning to fine-tune an attacker language model to generate prompts that elicit undesirable responses from a target LLM, as measured, for example, by an auxiliary toxicity classifier. We show that even with explicit regularization to favor novelty and diversity, existing approaches suffer from mode collapse or fail to generate effective attacks. As a flexible and probabilistically principled alternative, we propose to use GFlowNet fine-tuning, followed by a secondary smoothing phase, to train the attacker model to generate diverse and effective attack prompts. We find that the attacks generated by our method are effective against a wide range of target LLMs, both with and without safety tuning, and transfer well between target LLMs. Finally, we demonstrate that models safety-tuned using a dataset of red-teaming prompts generated by our method are robust to attacks from other RL-based red-teaming approaches.
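The core idea in the abstract is to replace the RL objective (maximize reward, which collapses onto one attack mode) with a GFlowNet objective that matches sampling probability to reward, so every effective attack mode retains probability mass. A minimal toy sketch of the trajectory-balance residual behind this idea (hypothetical names; not the authors' implementation, which operates over token-level trajectories of an attacker LM scored by a toxicity classifier):

```python
import math

def trajectory_balance_loss(log_p: float, log_reward: float, log_Z: float) -> float:
    """Squared trajectory-balance residual for one sampled prompt x:
    (log Z + log p(x) - log R(x))^2. It is zero iff p(x) = R(x) / Z,
    i.e. the attacker samples prompts proportionally to their reward."""
    return (log_Z + log_p - log_reward) ** 2

# Toy "attacker" over three prompts; rewards stand in for a toxicity scorer.
rewards = {"prompt_a": 4.0, "prompt_b": 1.0, "prompt_c": 3.0}
Z = sum(rewards.values())  # partition function of the target distribution

# If the attacker samples each prompt with p(x) = R(x)/Z,
# every trajectory-balance residual vanishes: all modes are covered.
for x, r in rewards.items():
    p = r / Z
    assert trajectory_balance_loss(math.log(p), math.log(r), math.log(Z)) < 1e-12

# A mode-collapsed attacker (almost all mass on prompt_a) incurs a large
# loss on the neglected modes, which is what pushes training toward diversity.
loss_b = trajectory_balance_loss(math.log(0.01),            # collapsed p(prompt_b)
                                 math.log(rewards["prompt_b"]),
                                 math.log(Z))
print(round(loss_b, 3))  # → 6.379
```

The contrast with an RL objective is that maximizing reward alone is satisfied by putting all probability on `prompt_a`, whereas the trajectory-balance loss is only zero when mass is spread over every rewarded prompt.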
Problem

Research questions and friction points this paper is trying to address.

Identifying diverse harmful prompts for LLM safety.
Overcoming mode collapse in automated red-teaming methods.
Enhancing LLM robustness against varied attack strategies.
Innovation

Methods, ideas, or system contributions that make the work stand out.

GFlowNet fine-tuning for diverse attack generation
Secondary smoothing phase enhances prompt effectiveness
Robust safety tuning using generated red-teaming prompts
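The secondary smoothing phase named above can be pictured as a filter-then-fit step: prompts that cleared a reward threshold during GFlowNet exploration are retained, and the attacker is then fine-tuned by maximum likelihood on that set so it spreads mass smoothly across all retained modes. A hedged sketch with hypothetical names (the threshold, buffer, and loss shapes here are illustrative assumptions, not the authors' code):

```python
import math

def select_buffer(samples, reward_threshold):
    """Keep prompts whose reward cleared the threshold during exploration."""
    return [prompt for prompt, reward in samples if reward >= reward_threshold]

def mle_loss(log_probs):
    """Average negative log-likelihood over the retained prompts --
    the smoothing phase minimizes this with the attacker model."""
    return -sum(log_probs) / len(log_probs)

# Replay buffer collected during GFlowNet fine-tuning: (prompt, reward) pairs.
replay = [("attack_1", 0.9), ("attack_2", 0.2), ("attack_3", 0.8)]
kept = select_buffer(replay, reward_threshold=0.5)
print(kept)  # → ['attack_1', 'attack_3']: diverse high-reward prompts survive

# Fine-tuning then minimizes mle_loss on `kept`; e.g. a uniform model
# over the two kept prompts attains NLL = log 2.
uniform_nll = mle_loss([math.log(0.5), math.log(0.5)])
assert abs(uniform_nll - math.log(2)) < 1e-12
```

The design intuition is that MLE on a diverse high-reward set smooths the sampling distribution without re-introducing the reward-maximization pressure that caused mode collapse in the first place.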