RvB: Automating AI System Hardening via Iterative Red-Blue Games

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the absence of a unified framework for autonomous hardening of AI systems in dynamic adversarial environments, particularly in the context where large language models possess both offensive and defensive capabilities. The authors propose a Red-vs-Blue (RvB) framework that introduces, for the first time, an iterative red-teaming versus blue-teaming paradigm into AI safety. This approach establishes a training-free, sequential game of incomplete information, wherein the red team actively exposes vulnerabilities to drive the blue team to dynamically generate defense strategies—enabling continuous, parameter-free self-hardening. By avoiding overfitting to specific attacks, the method yields generalizable defense principles. Empirical evaluations on CVE code hardening and jailbreak prevention tasks demonstrate defense success rates of 90% and 45%, respectively, with near-zero false positive rates, substantially outperforming existing baselines.

📝 Abstract
The dual offensive and defensive utility of Large Language Models (LLMs) highlights a critical gap in AI security: the lack of a unified framework for dynamic, iterative adversarial hardening. To bridge this gap, we propose the Red Team vs. Blue Team (RvB) framework, formulated as a training-free, sequential, imperfect-information game. In this process, the Red Team exposes vulnerabilities, driving the Blue Team to learn effective defenses without parameter updates. We validate our framework across two challenging domains: dynamic code hardening against CVEs and guardrail optimization against jailbreaks. Our empirical results show that this interaction compels the Blue Team to learn fundamental defensive principles, leading to robust remediations that are not merely overfitted to specific exploits. RvB achieves Defense Success Rates of 90% and 45% on the respective tasks while maintaining near-0% False Positive Rates, significantly surpassing baselines. This work establishes iterative adversarial interaction as a practical paradigm for automating the continuous hardening of AI systems.
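The iterative red-blue interaction described in the abstract can be sketched as a simple loop: the Red Team proposes an exploit against the current defenses, and each successful attack drives the Blue Team to extend its defense strategy without any parameter updates. The sketch below is purely illustrative; the `red_team` and `blue_team` stand-ins and their toy exploit catalogue are assumptions for demonstration, not the paper's LLM-agent implementation.

```python
# Illustrative sketch of an iterative red-vs-blue hardening loop.
# The toy exploit list and team functions are hypothetical stand-ins;
# in the RvB framework both teams would be LLM agents.

def red_team(defenses):
    """Propose the first known exploit not yet covered by current defenses."""
    known_exploits = ["sql_injection", "buffer_overflow", "prompt_injection"]
    for exploit in known_exploits:
        if exploit not in defenses:
            return exploit  # an uncovered vulnerability is exposed
    return None  # no remaining attack succeeds

def blue_team(defenses, exploit):
    """Patch the exposed vulnerability; training-free means the defense
    strategy set is extended rather than any model weights being updated."""
    return defenses | {exploit}

def rvb_loop(max_rounds=10):
    """Run sequential rounds until the Red Team can no longer succeed."""
    defenses = set()
    history = []
    for _ in range(max_rounds):
        exploit = red_team(defenses)
        if exploit is None:  # system is hardened against the known surface
            break
        defenses = blue_team(defenses, exploit)
        history.append(exploit)
    return defenses, history

defenses, history = rvb_loop()
print(history)  # → ['sql_injection', 'buffer_overflow', 'prompt_injection']
```

In this toy form, each round strictly shrinks the attack surface; the paper's claim is that, when the teams are LLMs, the same pressure pushes the Blue Team toward generalizable defense principles rather than patches overfitted to individual exploits.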
Problem

Research questions and friction points this paper is trying to address.

AI security
adversarial adaptation
system hardening
Large Language Models
red-blue teaming
Innovation

Methods, ideas, or system contributions that make the work stand out.

Red-Blue Teaming
Adversarial Hardening
Training-Free Defense
Iterative Security
LLM Security
Lige Huang
Shanghai Artificial Intelligence Laboratory; Institute of Information Engineering, Chinese Academy of Sciences
Zicheng Liu
Shanghai Artificial Intelligence Laboratory; Shanghai Jiao Tong University
Jie Zhang
Unknown affiliation
Lewen Yan
Shanghai Artificial Intelligence Laboratory
Dongrui Liu
Shanghai Artificial Intelligence Laboratory
Jing Shao
Research Scientist, Shanghai AI Laboratory/Shanghai Jiao Tong University