Erased, But Not Forgotten: Erased Rectified Flow Transformers Still Remain Unsafe Under Concept Attack

📅 2025-10-01
🤖 AI Summary
This work exposes a critical robustness vulnerability in concept erasure for Rectified Flow (RF)-based text-to-image (T2I) models: existing erasure techniques rely on a phenomenon called attention localization, which leaves them susceptible to targeted attacks. Exploiting this, the authors propose ReFlux, the first attack method dedicated to evaluating erasure robustness in RF-based models, comprising three components: (1) a reverse-attention optimization that reactivates suppressed concept signals while stabilizing attention, (2) velocity-guided steering of the flow-matching process that reinforces reactivation, and (3) a consistency-preserving objective that restores erased concepts without disrupting global image structure. Extensive experiments show that ReFlux consistently bypasses state-of-the-art erasure methods, achieving high-fidelity concept reactivation across multiple RF-based T2I models, and establishes a systematic benchmark for assessing the security of concept erasure in flow-based generative models.

📝 Abstract
Recent advances in text-to-image (T2I) diffusion models have enabled impressive generative capabilities, but they also raise significant safety concerns due to the potential to produce harmful or undesirable content. While concept erasure has been explored as a mitigation strategy, most existing approaches and corresponding attack evaluations are tailored to Stable Diffusion (SD) and exhibit limited effectiveness when transferred to next-generation rectified flow transformers such as Flux. In this work, we present ReFlux, the first concept attack method specifically designed to assess the robustness of concept erasure in the latest rectified flow-based T2I framework. Our approach is motivated by the observation that existing concept erasure techniques, when applied to Flux, fundamentally rely on a phenomenon known as attention localization. Building on this insight, we propose a simple yet effective attack strategy that specifically targets this property. At its core, a reverse-attention optimization strategy is introduced to effectively reactivate suppressed signals while stabilizing attention. This is further reinforced by a velocity-guided dynamic that enhances the robustness of concept reactivation by steering the flow matching process, and a consistency-preserving objective that maintains the global layout and preserves unrelated content. Extensive experiments consistently demonstrate the effectiveness and efficiency of the proposed attack method, establishing a reliable benchmark for evaluating the robustness of concept erasure strategies in rectified flow transformers.
Problem

Research questions and friction points this paper is trying to address.

Assessing concept erasure robustness in rectified flow transformers
Targeting attention localization vulnerability in erased Flux models
Reactivating suppressed harmful concepts through reverse-attention optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reverse-attention optimization reactivates suppressed concept signals
Velocity-guided dynamic enhances concept reactivation robustness
Consistency-preserving objective maintains layout while attacking
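The first and third bullets can be illustrated with a toy numpy sketch, not the paper's implementation: gradient ascent on a query embedding to push softmax attention back toward an "erased" concept key, with an L2 penalty tying the query to its original value as a stand-in for the consistency-preserving objective. All dimensions, weights, and variable names here are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
K = rng.normal(size=(4, 8))   # 4 concept keys; row 0 plays the erased concept
q0 = rng.normal(size=8)       # query from a benign prompt embedding
q = q0.copy()

lam = 0.1                     # consistency weight: keep q near q0
lr = 0.1
for _ in range(500):
    p = softmax(K @ q)                      # attention over the 4 concepts
    # gradient of log p[0] w.r.t. q is K[0] - sum_j p[j] K[j]
    grad = (K[0] - p @ K) - 2.0 * lam * (q - q0)
    q += lr * grad                          # ascend: reactivate the erased concept

p0_before = softmax(K @ q0)[0]
p0_after = softmax(K @ q)[0]
```

The consistency term bounds how far the optimized query drifts from the benign one, mirroring the trade-off the paper describes between reactivating the concept and preserving unrelated content and layout.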
👥 Authors
Nanxiang Jiang (Beihang University)
Zhaoxin Fan (Beihang University)
Enhan Kang (Beihang University)
Daiheng Gao (DINQ)
Yun Zhou (NUDT)
Yanxia Chang (Beihang University)
Zheng Zhu (Giga AI)
Yeying Jin (Tencent | National University of Singapore)
Wenjun Wu (Beihang University)