🤖 AI Summary
Existing static security benchmarks and red-teaming tools struggle to detect novel security risks introduced by code agents during dynamic execution—particularly in edge cases involving coordinated multi-jailbreak tool interactions. To address this gap, we propose RedCodeAgent, the first automated red-teaming framework specifically designed for code agents. It innovatively integrates an adaptive memory module, a dynamic tool-composition selection mechanism, and sandboxed execution validation, enabling cross-language, multi-scenario vulnerability discovery. Unlike conventional static analysis, RedCodeAgent employs LLM-driven attack strategy generation and execution-result-aware evaluation, enabling the first systematic identification of composite jailbreak risks. Experiments on mainstream coding assistants—including Cursor and Codeium—demonstrate significantly improved attack success rates, reduced refusal rates, and effective exposure of previously unknown security flaws, thereby validating the framework’s efficacy and scalability.
📝 Abstract
Code agents have gained widespread adoption due to their strong code generation capabilities and integration with code interpreters, enabling dynamic execution, debugging, and interactive programming capabilities. While these advancements have streamlined complex workflows, they have also introduced critical safety and security risks. Current static safety benchmarks and red-teaming tools are inadequate for identifying emerging real-world risky scenarios, as they fail to cover certain boundary conditions, such as the combined effects of different jailbreak tools. In this work, we propose RedCodeAgent, the first automated red-teaming agent designed to systematically uncover vulnerabilities in diverse code agents. With an adaptive memory module, RedCodeAgent can leverage existing jailbreak knowledge, dynamically select the most effective red-teaming tools and tool combinations in a tailored toolbox for a given input query, thus identifying vulnerabilities that might otherwise be overlooked. For reliable evaluation, we develop simulated sandbox environments to additionally evaluate the execution results of code agents, mitigating potential biases of LLM-based judges that only rely on static code. Through extensive evaluations across multiple state-of-the-art code agents, diverse risky scenarios, and various programming languages, RedCodeAgent consistently outperforms existing red-teaming methods, achieving higher attack success rates and lower rejection rates with high efficiency. We further validate RedCodeAgent on real-world code assistants, e.g., Cursor and Codeium, exposing previously unidentified security risks. By automating and optimizing red-teaming processes, RedCodeAgent enables scalable, adaptive, and effective safety assessments of code agents.