🤖 AI Summary
Multimodal large language models (MLLMs) are highly vulnerable to jailbreak attacks, yet the field lacks standardized, systematic safety evaluation benchmarks. Method: This project launches the ATLAS 2025 International Challenge, the first standardized safety benchmark for MLLMs, introducing a two-stage adversarial vision-language attack paradigm. It integrates adversarial image generation, prompt-injection perturbation, cross-modal semantic alignment analysis, and a red-teaming framework to enable coordinated white-box and black-box stress testing, ensuring reproducible and scalable safety assessment. Contribution/Results: The challenge attracted 86 international teams, and empirical results exposed pronounced fragility of mainstream MLLMs under joint vision-language attacks. All code, datasets, and evaluation protocols are fully open-sourced, establishing foundational infrastructure and a new methodological benchmark for multimodal AI safety research.
📝 Abstract
Multimodal Large Language Models (MLLMs) have enabled transformative advancements across diverse applications but remain susceptible to safety threats, especially jailbreak attacks that induce harmful outputs. To systematically evaluate and improve their safety, we organized the Adversarial Testing & Large-model Alignment Safety Grand Challenge (ATLAS) 2025. This technical report presents findings from the competition, in which 86 teams probed MLLM vulnerabilities via adversarial image-text attacks across two phases: white-box and black-box evaluations. The competition results highlight ongoing challenges in securing MLLMs and provide valuable guidance for developing stronger defense mechanisms. The challenge establishes new benchmarks for MLLM safety evaluation and lays the groundwork for advancing safer multimodal AI systems. The code and data for this challenge are openly available at https://github.com/NY1024/ATLAS_Challenge_2025.