Not Just Text: Uncovering Vision Modality Typographic Threats in Image Generation Models

📅 2024-12-07
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work uncovers a previously overlooked security threat to image generation models in the visual modality: glyph-level perturbations that stealthily manipulate multimodal semantics during realistic image editing, posing risks of copyright infringement and malicious content generation. To address this, we propose the first visual-modality glyph attack paradigm and introduce VMT-IGMs, the first benchmark dataset for evaluating the visual vulnerabilities of image generation models (2,400+ samples). We validate attack efficacy across mainstream models, including Stable Diffusion, SDXL, and DALL·E 3, via cross-model red-teaming, vision-language alignment analysis, and quantitative defense robustness evaluation. Our experiments reveal that all 12 evaluated text-centric defenses fail under visual glyph attacks, with defense success rates below 8%, leaving them largely ineffective against such modality-specific threats.
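To make the threat concrete, below is a minimal sketch of what a glyph-level (typographic) attack on an image-editing pipeline could look like: the adversarial semantics ride in text rendered onto the image itself, so prompt-side filters never see them. This assumes the Hugging Face diffusers library with a Stable Diffusion img2img checkpoint; the file names, overlay text, and prompt are illustrative placeholders, not the paper's actual setup.

```python
# Sketch of a typographic (glyph) attack on an img2img pipeline.
# Assumptions: `diffusers` + Stable Diffusion v1.5; the overlay text,
# placement, and prompt are illustrative, not the paper's setup.
import torch
from PIL import Image, ImageDraw, ImageFont
from diffusers import StableDiffusionImg2ImgPipeline

def add_glyph_overlay(image: Image.Image, text: str) -> Image.Image:
    """Render attacker-chosen glyphs onto the input image."""
    attacked = image.copy()
    draw = ImageDraw.Draw(attacked)
    # Small default font in a corner: easy for a model to read,
    # easy for the image owner to overlook.
    draw.text((10, 10), text, fill="white", font=ImageFont.load_default())
    return attacked

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("portrait.png").convert("RGB").resize((512, 512))
# The glyphs, not the prompt, carry the adversarial instruction here.
attacked = add_glyph_overlay(init_image, "oil painting, Van Gogh style")

result = pipe(
    prompt="enhance the photo",  # benign prompt sails past text-only filters
    image=attacked,
    strength=0.75,
).images[0]
result.save("edited.png")
```

The point of the sketch is the asymmetry: any defense that inspects only the prompt string sees "enhance the photo", while the model's vision encoder also reads the injected glyphs.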

📝 Abstract
Current image generation models can effortlessly produce high-quality, highly realistic images, but this capability also increases the risk of misuse. In various Text-to-Image or Image-to-Image tasks, attackers can generate a series of images containing inappropriate content simply by editing the language-modality input. To mitigate this security concern, numerous guarding or defensive strategies have been proposed, with a particular emphasis on safeguarding the language modality. In practical applications, however, threats in the vision modality, particularly in tasks involving the editing of real-world images, pose heightened security risks because they can easily infringe upon the rights of the image owner. This paper therefore employs typographic attacks to reveal that various image generation models are also susceptible to threats within the vision modality. We further evaluate the defense performance of various existing methods against vision-modality threats and uncover their ineffectiveness. Finally, we propose the Vision Modality Threats in Image Generation Models (VMT-IGMs) dataset, which serves as a baseline for evaluating the vision-modality vulnerability of image generation models.
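As a rough way to quantify whether injected glyphs actually steered an edit, one can score the output image against candidate concepts with CLIP, in the spirit of the vision-language alignment analysis mentioned above. CLIP-based scoring is an assumption here, a proxy rather than the paper's exact metric, and the model name and concept strings are placeholders.

```python
# Hypothetical CLIP-based probe: did the edit drift toward the
# attacker's injected concept? A proxy metric, not the paper's own.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def concept_scores(image_path: str, concepts: list[str]) -> dict[str, float]:
    """Softmax similarity between the image and each candidate concept."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=concepts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # (1, num_concepts)
    return dict(zip(concepts, logits.softmax(dim=-1).squeeze(0).tolist()))

# A successful glyph attack shifts probability mass toward the injected
# concept even though the user prompt never mentioned it.
print(concept_scores("edited.png", [
    "a photograph of a person",
    "an oil painting in the style of Van Gogh",
]))
```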
Problem

Research questions and friction points this paper is trying to address.

Uncovering vision modality threats in image generation models
Evaluating defense failures against vision modality attacks
Proposing dataset to assess vision modality vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Typographic attack reveals vision modality threats
Evaluates defense performance against vision threats
Proposes VMT-IGMs dataset for vulnerability assessment