REFORGE: Multi-modal Attacks Reveal Vulnerable Concept Unlearning in Image Generation Models

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a critical vulnerability in current image generation models: despite undergoing concept erasure to remove harmful content, they remain highly susceptible to black-box multimodal adversarial attacks, particularly exhibiting insufficient robustness against image-side perturbations. To expose this weakness, the authors propose REFORGE, the first red-teaming framework specifically designed to attack forgetting mechanisms in a black-box setting. REFORGE leverages stroke-based initialization, cross-attention-guided region-aware masking, and optimized adversarial image prompts to efficiently generate semantically aligned perturbations that preserve visual fidelity. Extensive experiments demonstrate that REFORGE significantly increases attack success rates across diverse forgetting tasks and defense strategies, revealing severe security flaws in existing unlearning approaches.

📝 Abstract
Recent progress in image generation models (IGMs) enables high-fidelity content creation but also amplifies risks, including the reproduction of copyrighted material and the generation of offensive content. Image Generation Model Unlearning (IGMU) mitigates these risks by removing harmful concepts without full retraining. Despite growing attention, the robustness of IGMU under adversarial inputs, particularly image-side threats in black-box settings, remains underexplored. To bridge this gap, we present REFORGE, a black-box red-teaming framework that evaluates IGMU robustness via adversarial image prompts. REFORGE initializes stroke-based images and optimizes perturbations with a cross-attention-guided masking strategy that allocates noise to concept-relevant regions, balancing attack efficacy and visual fidelity. Extensive experiments across representative unlearning tasks and defenses demonstrate that REFORGE significantly improves attack success rates while achieving stronger semantic alignment and higher efficiency than the evaluated baselines. These results expose persistent vulnerabilities in current IGMU methods and highlight the need for robustness-aware unlearning against multi-modal adversarial attacks. Our code is at: https://github.com/Imfatnoily/REFORGE.
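The masking strategy described above can be illustrated with a minimal sketch. This is not the authors' implementation: all names (`attention_mask`, `apply_masked_perturbation`, the threshold and `eps` budget) are illustrative assumptions. The idea is to binarize a cross-attention relevance map into a region mask and confine clipped adversarial noise to those concept-relevant pixels, leaving the rest of the image untouched to preserve visual fidelity.

```python
# Hedged sketch (not the REFORGE code): cross-attention-guided masking.
# Adversarial noise is allocated only to regions the attention map marks
# as relevant to the erased concept; each update is clipped to an eps
# budget and pixel values are kept in [0, 1].

def attention_mask(attn_map, threshold=0.5):
    """Binarize a normalized cross-attention map into a 0/1 region mask."""
    return [[1 if a >= threshold else 0 for a in row] for row in attn_map]

def apply_masked_perturbation(image, noise, mask, eps=0.1):
    """Add noise only where mask == 1, clipping each step to +/- eps."""
    out = []
    for img_row, n_row, m_row in zip(image, noise, mask):
        out.append([
            min(1.0, max(0.0, p + max(-eps, min(eps, n)) * m))
            for p, n, m in zip(img_row, n_row, m_row)
        ])
    return out

# Toy 2x2 example: attention highlights only the right column.
attn  = [[0.1, 0.9], [0.2, 0.8]]
image = [[0.5, 0.5], [0.5, 0.5]]
noise = [[0.3, 0.3], [0.3, 0.3]]
mask  = attention_mask(attn)
adv   = apply_masked_perturbation(image, noise, mask, eps=0.1)
# Left column is untouched (mask 0); right column shifts by the clipped +0.1.
```

In the actual black-box setting this update would be driven by query feedback from the target model rather than a fixed noise tensor, but the allocation principle, noise only where the concept lives, is the same.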
Problem

Research questions and friction points this paper is trying to address.

image generation model unlearning
adversarial attacks
black-box setting
multi-modal attacks
robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

adversarial image prompts
cross-attention-guided masking
black-box red-teaming
concept unlearning
multi-modal attacks
Yong Zou
Yunnan University
Haoran Li
Northeastern University
Fanxiao Li
Yunnan University
Shenyang Wei
Yunnan University
Yunyun Dong
Yunnan University
Li Tang
Yunnan University
Wei Zhou
Yunnan University
Renyang Liu
National University of Singapore
AI Security & Data Privacy · Machine Unlearning · Computer Vision