EraseAnything: Enabling Concept Erasure in Rectified Flow Transformers

📅 2024-12-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Emerging flow-matching text-to-image (T2I) models, such as Stable Diffusion v3 and Flux, offer limited control for precisely erasing sensitive concepts (e.g., brands, individuals) without degrading generation quality. Method: this work introduces the first concept-erasure framework tailored to flow-matching Transformer architectures. It formulates erasure as a bi-level optimization problem that integrates attention-map regularization with a dual-stage self-contrastive learning strategy to jointly ensure erasure specificity and generalization robustness, further supported by LoRA fine-tuning, attention-visualization constraints, and flow-matching gradient alignment. Results: the method achieves state-of-the-art performance across diverse sensitive-concept erasure tasks: a 37% higher erasure success rate, 98.2% fidelity preservation for unrelated concepts, and no statistically significant degradation in generation quality. Core contribution: the first controllable, concept-level erasure paradigm for flow-matching T2I models, built on an efficient, synergistic optimization mechanism.

📝 Abstract
Removing unwanted concepts from large-scale text-to-image (T2I) diffusion models while maintaining their overall generative quality remains an open challenge. This difficulty is especially pronounced in emerging paradigms, such as Stable Diffusion (SD) v3 and Flux, which incorporate flow matching and transformer-based architectures. These advancements limit the transferability of existing concept-erasure techniques that were originally designed for the previous T2I paradigm (e.g., SD v1.4). In this work, we introduce EraseAnything, the first method specifically developed to address concept erasure within the latest flow-based T2I framework. We formulate concept erasure as a bi-level optimization problem, employing LoRA-based parameter tuning and an attention map regularizer to selectively suppress undesirable activations. Furthermore, we propose a self-contrastive learning strategy to ensure that removing unwanted concepts does not inadvertently harm performance on unrelated ones. Experimental results demonstrate that EraseAnything successfully fills the research gap left by earlier methods in this new T2I paradigm, achieving state-of-the-art performance across a wide range of concept erasure tasks.
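To make the abstract's bi-level formulation concrete, here is a minimal, purely illustrative sketch of the trade-off it describes: one term suppresses the model's response to the erased concept, while a self-contrastive term keeps behavior on unrelated concepts anchored to the original model. This is a hypothetical scalar stand-in for the LoRA weights and attention activations; all function names and constants are assumptions for illustration, not the authors' implementation.

```python
def erase_loss(theta, target_act):
    # Erasure objective: drive the model's response to the
    # target (erased) concept toward zero.
    return (theta * target_act) ** 2

def preserve_loss(theta, theta0, anchor_act):
    # Self-contrastive preservation objective: keep responses to an
    # unrelated "anchor" concept close to the frozen original model theta0.
    return ((theta - theta0) * anchor_act) ** 2

def erase_step(theta, theta0, target_act=1.0, anchor_act=0.2,
               lr=0.1, lam=0.5):
    # One gradient-descent step on L = erase + lam * preserve,
    # computed analytically for this scalar toy.
    g_erase = 2 * theta * target_act ** 2
    g_pres = 2 * (theta - theta0) * anchor_act ** 2
    return theta - lr * (g_erase + lam * g_pres)

theta0 = 1.0          # "original" weight
theta = theta0
for _ in range(200):  # fine-tune the erasure parameter
    theta = erase_step(theta, theta0)
```

After convergence the erasure term is near zero while the anchor response stays close to the original model, which is the behavior the paper's bi-level objective is designed to balance.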
Problem

Research questions and friction points this paper is trying to address.

Information Filtering
Model Performance
Unwanted Concept Removal
Innovation

Methods, ideas, or system contributions that make the work stand out.

EraseAnything
Selective Information Ignoring
Performance Preservation
👥 Authors
Daiheng Gao (DINQ)
Shilin Lu (Nanyang Technological University)
Shaw Walters (Eliza Labs)
Wenbo Zhou (USTC)
Jiaming Chu (BUPT)
Jie Zhang (A*STAR)
Bang Zhang (Alibaba Tongyi Lab)
Mengxi Jia (Peking University; China Telecom Institute of Artificial Intelligence, TeleAI)
Jian Zhao (TeleAI)
Zhaoxin Fan (Beihang University)
Weiming Zhang (USTC)