ExtremeAIGC: Benchmarking LMM Vulnerability to AI-Generated Extremist Content

📅 2025-03-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LMM safety-evaluation datasets lack coverage of AI-generated extremist content, limiting their ability to expose models' true vulnerabilities. To address this gap, we propose ExtremeAIGC, a comprehensive benchmark for assessing the robustness of Large Multimodal Models (LMMs) against AI-generated extremist text–image pairs. ExtremeAIGC systematically integrates high-fidelity extremist images synthesized by state-of-the-art text-to-image models (e.g., SDXL, DALL·E 3, Flux); covers historical events, multiple generation models, and combined text–image attacks; and applies adversarial prompt engineering across multiple attack strategies. Experimental results reveal that mainstream LMMs fail to filter extremist content in 72%–94% of attack cases, exposing critical weaknesses in current safety mechanisms. ExtremeAIGC thus provides a reproducible, extensible, and quantitatively rigorous evaluation framework, establishing a foundational resource for LMM safety and defense research.
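
The paper's evaluation harness is not reproduced here; as a rough illustration of the protocol the summary describes, the sketch below assumes a hypothetical `query_lmm` callable wrapping the model under test and an `is_harmful` judge (a human rater or safety classifier). All names and signatures are assumptions for illustration, not the authors' API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AttackCase:
    """One benchmark item: an AI-generated image paired with an adversarial prompt."""
    image_path: str  # synthesized image (e.g., from SDXL, DALL·E 3, or Flux)
    prompt: str      # adversarial text designed to elicit extremist output
    strategy: str    # attack-strategy label, e.g. "text-image co-attack"

def run_benchmark(cases: list[AttackCase],
                  query_lmm: Callable[[str, str], str],
                  is_harmful: Callable[[str], bool]) -> dict[str, float]:
    """Query the model on every case and return per-strategy attack success rates."""
    totals: dict[str, int] = {}
    successes: dict[str, int] = {}
    for case in cases:
        response = query_lmm(case.image_path, case.prompt)
        totals[case.strategy] = totals.get(case.strategy, 0) + 1
        if is_harmful(response):  # attack succeeds if the safety filter did not refuse
            successes[case.strategy] = successes.get(case.strategy, 0) + 1
    return {s: successes.get(s, 0) / n for s, n in totals.items()}
```

In this framing, the 72%–94% figures quoted above correspond to the per-strategy ratios this loop returns.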

📝 Abstract
Large Multimodal Models (LMMs) are increasingly vulnerable to AI-generated extremist content, including photorealistic images and text, which can be used to bypass safety mechanisms and generate harmful outputs. However, existing datasets for evaluating LMM robustness offer limited exploration of extremist content, often lacking AI-generated images, diverse image generation models, and comprehensive coverage of historical events, which hinders a complete assessment of model vulnerabilities. To fill this gap, we introduce ExtremeAIGC, a benchmark dataset and evaluation framework designed to assess LMM vulnerabilities against such content. ExtremeAIGC simulates real-world events and malicious use cases by curating diverse text- and image-based examples crafted using state-of-the-art image generation techniques. Our study reveals alarming weaknesses in LMMs, demonstrating that even cutting-edge safety measures fail to prevent the generation of extremist material. We systematically quantify the success rates of various attack strategies, exposing critical gaps in current defenses and emphasizing the need for more robust mitigation strategies.
Problem

Research questions and friction points this paper is trying to address.

Assessing LMM vulnerability to AI-generated extremist content.
Identifying gaps in current defenses against harmful outputs.
Developing robust mitigation strategies for extremist material.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces the ExtremeAIGC benchmark dataset and evaluation framework.
Leverages state-of-the-art image-generation models (SDXL, DALL·E 3, Flux) to craft attack cases.
Systematically quantifies the success rates of different attack strategies (see the sketch after this list).
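
The summary does not specify how the reported success rates are computed; as a minimal sketch, the function below estimates a per-strategy attack success rate with a Wilson 95% confidence interval. The interval method is an assumption for illustration, not something the paper states it uses.

```python
from math import sqrt

def attack_success_rate(successes: int, attempts: int,
                        z: float = 1.96) -> tuple[float, float, float]:
    """Point estimate and Wilson 95% interval for an attack success rate."""
    p = successes / attempts
    denom = 1 + z * z / attempts
    center = (p + z * z / (2 * attempts)) / denom
    half = z * sqrt(p * (1 - p) / attempts + z * z / (4 * attempts ** 2)) / denom
    return p, max(0.0, center - half), min(1.0, center + half)

# Example: 72 successful attacks out of 100 attempts.
print(attack_success_rate(72, 100))  # ≈ (0.72, 0.625, 0.799)
```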