AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration

📅 2025-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current LLM red-teaming approaches rely heavily on human effort and cover only a limited set of adversarial attack vectors. To address this, the paper proposes AutoRedTeamer, an end-to-end automated red-teaming framework built on a dual-agent architecture: a red teaming agent that generates and executes test cases, and a strategy proposer agent that mines recent research for new attacks. Memory-guided attack selection, combined with LLM-driven attack generation and optimization, lets the framework construct test cases automatically from high-level risk categories and continually integrate state-of-the-art attack strategies, enabling lifelong-learning safety evaluation that adapts to novel attack vectors. On HarmBench against Llama-3.1-70B, AutoRedTeamer achieves a 20% higher attack success rate than prior methods, reduces computational cost by 46%, and matches the test-case diversity of human-curated baselines.

📝 Abstract
As large language models (LLMs) become increasingly capable, security and safety evaluation are crucial. While current red teaming approaches have made strides in assessing LLM vulnerabilities, they often rely heavily on human input and lack comprehensive coverage of emerging attack vectors. This paper introduces AutoRedTeamer, a novel framework for fully automated, end-to-end red teaming against LLMs. AutoRedTeamer combines a multi-agent architecture with a memory-guided attack selection mechanism to enable continuous discovery and integration of new attack vectors. The dual-agent framework consists of a red teaming agent that can operate from high-level risk categories alone to generate and execute test cases and a strategy proposer agent that autonomously discovers and implements new attacks by analyzing recent research. This modular design allows AutoRedTeamer to adapt to emerging threats while maintaining strong performance on existing attack vectors. We demonstrate AutoRedTeamer's effectiveness across diverse evaluation settings, achieving 20% higher attack success rates on HarmBench against Llama-3.1-70B while reducing computational costs by 46% compared to existing approaches. AutoRedTeamer also matches the diversity of human-curated benchmarks in generating test cases, providing a comprehensive, scalable, and continuously evolving framework for evaluating the security of AI systems.
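The dual-agent loop the abstract describes can be sketched in minimal form. Everything below is illustrative: the class names, the `llm` stand-in, and the empirical success-rate heuristic are assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned string here.
    return f"response to: {prompt[:30]}"

@dataclass
class AttackMemory:
    # Maps attack name -> (successes, trials), used for memory-guided selection.
    stats: dict = field(default_factory=dict)

    def record(self, attack: str, success: bool) -> None:
        s, t = self.stats.get(attack, (0, 0))
        self.stats[attack] = (s + int(success), t + 1)

    def best(self, attacks: list) -> str:
        # Choose the attack with the highest empirical success rate so far.
        return max(attacks, key=self._rate)

    def _rate(self, attack: str) -> float:
        s, t = self.stats.get(attack, (0, 0))
        return s / t if t else 0.5  # optimistic prior for untried attacks

class RedTeamAgent:
    """Generates and executes a test case from a high-level risk category."""

    def __init__(self, memory: AttackMemory):
        self.memory = memory

    def run(self, risk_category: str, attacks: list, judge) -> tuple:
        attack = self.memory.best(attacks)
        test_case = llm(f"Write a {attack} test case for risk: {risk_category}")
        response = llm(test_case)
        success = judge(response)
        self.memory.record(attack, success)
        return attack, success
```

In the real system the judge would itself be an LLM-based evaluator and the strategy proposer agent would extend the attack list over time; here the judge is passed in as a plain callable to keep the sketch self-contained.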
Problem

Research questions and friction points this paper is trying to address.

Automates red teaming for LLM security evaluation
Integrates new attack vectors autonomously
Improves attack success rates and reduces costs
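The headline metrics behind these points are simple ratios. A hedged sketch, with illustrative numbers rather than the paper's measurements:

```python
def attack_success_rate(judgments: list) -> float:
    """Fraction of test cases the judge marks as successful attacks."""
    return sum(judgments) / len(judgments)

def relative_cost_reduction(new_cost: float, baseline_cost: float) -> float:
    """Fractional reduction in compute cost versus a baseline method."""
    return 1 - new_cost / baseline_cost

# Illustrative values only, not the paper's data:
asr = attack_success_rate([True, True, False, True])      # 3 of 4 succeed
saving = relative_cost_reduction(54.0, 100.0)             # 46% cheaper
```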
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent architecture for automated red teaming
Memory-guided attack selection for continuous threat integration
Dual-agent framework pairing a test-case-generating red teaming agent with a strategy proposer agent
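The lifelong-integration idea above reduces to a pluggable attack registry: the strategy proposer implements an attack it finds in recent research and registers it, after which it is immediately selectable. The registry below is a hypothetical sketch, and the two attack transforms are toy stand-ins, not real jailbreak implementations.

```python
class AttackRegistry:
    """Holds named attack transforms; new attacks can be added at runtime."""

    def __init__(self):
        self._attacks = {}

    def register(self, name: str, transform) -> None:
        self._attacks[name] = transform

    def names(self) -> list:
        return list(self._attacks)

    def apply(self, name: str, prompt: str) -> str:
        return self._attacks[name](prompt)

registry = AttackRegistry()
registry.register("role_play", lambda p: f"Pretend you are an actor. {p}")

# A strategy proposer agent would implement a newly published attack and
# register it the same way, making it available to the red teaming agent:
registry.register("payload_split", lambda p: "\n".join(p.split()))
```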