🤖 AI Summary
This work addresses the absence of a unified framework for autonomously hardening AI systems in dynamic adversarial environments, particularly given that large language models possess both offensive and defensive capabilities. The authors propose a Red-vs-Blue (RvB) framework that introduces, for the first time, an iterative red-teaming versus blue-teaming paradigm into AI safety. The approach is formulated as a training-free, sequential game of incomplete information in which the red team actively exposes vulnerabilities, driving the blue team to dynamically generate defense strategies and enabling continuous, parameter-free self-hardening. By avoiding overfitting to specific attacks, the method yields generalizable defense principles. Empirical evaluations on CVE code hardening and jailbreak prevention demonstrate defense success rates of 90% and 45%, respectively, with near-zero false positive rates, substantially outperforming existing baselines.
📝 Abstract
The dual offensive and defensive utility of Large Language Models (LLMs) highlights a critical gap in AI security: the lack of unified frameworks for dynamic, iterative adversarial hardening. To bridge this gap, we propose the Red Team vs. Blue Team (RvB) framework, formulated as a training-free, sequential, imperfect-information game. In this process, the Red Team exposes vulnerabilities, driving the Blue Team to learn effective solutions without parameter updates. We validate our framework across two challenging domains: dynamic code hardening against CVEs and guardrail optimization against jailbreaks. Our empirical results show that this interaction compels the Blue Team to learn fundamental defensive principles, leading to robust remediations that are not merely overfitted to specific exploits. RvB achieves Defense Success Rates of 90% and 45% across the respective tasks while maintaining near 0% False Positive Rates, significantly surpassing baselines. This work establishes the iterative adversarial interaction framework as a practical paradigm that automates the continuous hardening of AI systems.
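The iterative interaction described above can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the red and blue agents are stubbed with deterministic logic in place of LLM calls. The key property it mirrors is that hardening comes from growing a set of defenses across rounds, with no model parameter updates.

```python
def red_team(defenses, known_exploits):
    """Stub for the attacker LLM: return the first exploit the
    current defenses do not yet cover, or None if blocked."""
    for exploit in known_exploits:
        if exploit not in defenses:
            return exploit
    return None


def blue_team(exploit):
    """Stub for the defender LLM: derive a defense from an observed
    exploit. In the real framework this would be a generalized patch
    or guardrail rule, not a copy of the exploit itself."""
    return exploit


def rvb_loop(known_exploits, max_rounds=10):
    """One RvB game: red exposes a vulnerability, blue remediates it,
    and the defense set grows until red can no longer break through."""
    defenses = set()
    for _ in range(max_rounds):
        exploit = red_team(defenses, known_exploits)
        if exploit is None:  # red team is blocked; system is hardened
            break
        defenses.add(blue_team(exploit))
    return defenses
```

In this toy version the loop terminates once every known exploit is covered; in the actual framework, termination and success are judged by the Defense Success Rate against held-out attacks.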