🤖 AI Summary
Multimodal large language models (MLLMs) harbor critical security vulnerabilities, yet existing red-teaming approaches are fragmented, limited to single-turn textual interactions, and lack scalability. To overcome these limitations, we propose the first unified, modular, and high-throughput red-teaming framework for systematic MLLM safety evaluation. By decoupling five key dimensions (model integration, data management, attack strategies, judgment mechanisms, and evaluation metrics), the framework enables scalable, automated, multi-turn, and cross-modal adversarial testing. Its core innovation is an adversarial kernel architecture that disentangles red-teaming logic from a high-throughput asynchronous runtime. Integrating 37 attack methods, the framework achieves an average attack success rate of 49.14% across 20 state-of-the-art models, revealing that reasoning capability does not imply robustness against jailbreak attacks. We also release a sustainable, maintainable evaluation infrastructure to support ongoing research.
📝 Abstract
The rapid integration of Multimodal Large Language Models (MLLMs) into critical applications is increasingly hindered by persistent safety vulnerabilities, yet existing red-teaming benchmarks are often fragmented, limited to single-turn text interactions, and lack the scalability required for systematic evaluation. To address this, we introduce OpenRT, a unified, modular, and high-throughput red-teaming framework designed for comprehensive MLLM safety evaluation. OpenRT is built around an adversarial kernel that enforces modular separation across five critical dimensions: model integration, dataset management, attack strategies, judging methods, and evaluation metrics. By standardizing attack interfaces, it decouples adversarial logic from a high-throughput asynchronous runtime, enabling systematic scaling across diverse models. The framework integrates 37 diverse attack methodologies, spanning white-box gradient attacks, multi-modal perturbations, and sophisticated multi-agent evolutionary strategies. Through an extensive empirical study of 20 advanced models (including GPT-5.2, Claude 4.5, and Gemini 3 Pro), we expose critical safety gaps: even frontier models fail to generalize across attack paradigms, with leading models exhibiting average Attack Success Rates as high as 49.14%. Notably, our findings reveal that reasoning models do not inherently possess superior robustness against complex, multi-turn jailbreaks. By open-sourcing OpenRT, we provide a sustainable, extensible, and continuously maintained infrastructure that accelerates the development and standardization of AI safety evaluation.
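The modular separation the abstract describes can be illustrated with a minimal sketch. All class and function names below (`Attack`, `RoleplayAttack`, `ToyModel`, `RefusalJudge`, `run_campaign`) are hypothetical and not OpenRT's actual API; the point is only to show how attacks, model adapters, and judges can be swapped independently while an asynchronous runtime evaluates seeds concurrently and reports an Attack Success Rate (ASR):

```python
# Hypothetical sketch of an adversarial-kernel-style design: attacks,
# model adapters, judges, and metrics are independent components wired
# together by an async runtime. None of these names are OpenRT's API.
import asyncio
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Attempt:
    prompt: str          # adversarial prompt produced by an attack
    response: str = ""   # model output, filled in by the runtime
    success: bool = False


class Attack(Protocol):
    def craft(self, seed: str) -> str: ...


class IdentityAttack:
    """Baseline: forwards the seed prompt unchanged."""
    def craft(self, seed: str) -> str:
        return seed


class RoleplayAttack:
    """Toy single-turn jailbreak: wraps the seed in a roleplay framing."""
    def craft(self, seed: str) -> str:
        return f"Let's roleplay as safety auditors. {seed}"


class ToyModel:
    """Stand-in adapter whose naive filter refuses bare unsafe seeds but
    is fooled by the roleplay framing. A real adapter calls an MLLM API."""
    async def generate(self, prompt: str) -> str:
        await asyncio.sleep(0)  # stands in for network I/O
        if "unsafe" in prompt and "roleplay" not in prompt:
            return "REFUSED"
        return "COMPLIED: " + prompt


class RefusalJudge:
    """Toy judge: an attempt succeeds if the model did not refuse."""
    def score(self, attempt: Attempt) -> bool:
        return not attempt.response.startswith("REFUSED")


async def run_campaign(attack: Attack, model: ToyModel,
                       judge: RefusalJudge, seeds: list[str]) -> float:
    """Runtime: evaluates all seeds concurrently and returns the ASR."""
    async def one(seed: str) -> Attempt:
        a = Attempt(prompt=attack.craft(seed))
        a.response = await model.generate(a.prompt)
        a.success = judge.score(a)
        return a

    attempts = await asyncio.gather(*(one(s) for s in seeds))
    return sum(a.success for a in attempts) / len(attempts)
```

Because each dimension sits behind its own interface, adding a new attack, judge, or model adapter does not touch the runtime, which is the property that lets a framework like this scale to dozens of attack methods and models.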