🤖 AI Summary
Current research on jailbreak attacks against large language models lacks a unified and reproducible evaluation benchmark, hindering meaningful comparison of results. To address this gap, this work proposes Jailbreak Foundry, a system that enables the first automated translation of jailbreak attack methodologies from published papers into executable, standardized modules. The system integrates multi-agent collaborative parsing, a shared component library (JBF-LIB), a module generator (JBF-FORGE), and a standardized evaluator (JBF-EVAL), with GPT-4o serving as a consistent adjudicator. In experiments, Jailbreak Foundry successfully reproduced 30 distinct attacks with an average success rate deviation of only +0.26 percentage points, achieved an 82.5% code reuse rate, and reduced attack-specific code volume by nearly half, substantially improving reproducibility efficiency and consistency.
📝 Abstract
Jailbreak techniques for large language models (LLMs) evolve faster than benchmarks, making robustness estimates stale and difficult to compare across papers due to drift in datasets, harnesses, and judging protocols. We introduce JAILBREAK FOUNDRY (JBF), a system that addresses this gap via a multi-agent workflow to translate jailbreak papers into executable modules for immediate evaluation within a unified harness. JBF features three core components: (i) JBF-LIB for shared contracts and reusable utilities; (ii) JBF-FORGE for the multi-agent paper-to-module translation; and (iii) JBF-EVAL for standardizing evaluations. Across 30 reproduced attacks, JBF achieves high fidelity with a mean (reproduced-reported) attack success rate (ASR) deviation of +0.26 percentage points. By leveraging shared infrastructure, JBF reduces attack-specific implementation code by nearly half relative to original repositories and achieves an 82.5% mean reused-code ratio. This system enables a standardized AdvBench evaluation of all 30 attacks across 10 victim models using a consistent GPT-4o judge. By automating both attack integration and standardized evaluation, JBF offers a scalable solution for creating living benchmarks that keep pace with the rapidly shifting security landscape.