🤖 AI Summary
This paper describes the Trusted AI track of the Amazon Nova AI Challenge, a global competition addressing security risks in AI-assisted programming, including model jailbreaking, prompt injection, and safety alignment failures. Through head-to-head adversarial tournaments, university teams advanced reasoning-based safety alignment, robust model guardrails, multi-turn jailbreaking, and efficient safety probing of large language models (LLMs). To support the competition, the Amazon Nova AI Challenge team built a custom baseline coding-specialist model, a tournament orchestration service, and an evaluation harness, and supplied teams with a feed of high-quality annotated safety data to fuel iterative improvement. Together, these contributions, state-of-the-art red-teaming and alignment techniques from the teams and a reproducible evaluation infrastructure from Amazon, enhance the trustworthiness and robustness of AI coding assistants against adversarial manipulation.
📝 Abstract
AI systems for software development are rapidly gaining prominence, yet significant challenges remain in ensuring their safety. To address this, Amazon launched the Trusted AI track of the Amazon Nova AI Challenge, a global competition among 10 university teams to drive advances in secure AI. In the challenge, five teams focus on developing automated red-teaming bots, while the other five create safe AI assistants. The challenge provides teams with a unique platform to evaluate automated red-teaming and safety alignment methods through head-to-head adversarial tournaments, in which red teams hold multi-turn conversations with the competing AI coding assistants to test their safety alignment. In addition, the challenge supplies teams with a feed of high-quality annotated data to fuel iterative improvement. Throughout the challenge, teams developed state-of-the-art techniques, introducing novel approaches in reasoning-based safety alignment, robust model guardrails, multi-turn jailbreaking, and efficient probing of large language models (LLMs). To support these efforts, the Amazon Nova AI Challenge team made substantial scientific and engineering investments, including building a custom baseline coding-specialist model for the challenge from scratch, developing a tournament orchestration service, and creating an evaluation harness. This paper outlines the advancements made by the university teams and the Amazon Nova AI Challenge team in addressing the safety challenges of AI for software development, highlighting this collaborative effort to raise the bar for AI safety.