EraseFlow: Learning Concept Erasure Policies via GFlowNet-Driven Alignment

📅 2025-11-02
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenge of erasing harmful or proprietary concepts from diffusion models without brittle adversarial losses, hand-crafted reward models, or prohibitive retraining cycles. It formulates concept erasure as a policy-exploration problem over the space of denoising trajectories and proposes a forgetting method that employs Generative Flow Networks (GFlowNets) to learn trajectory-level balanced policies, thereby guiding the generative process to naturally avoid target concepts. The approach achieves, for the first time, zero-shot generalization to unseen concepts, circumventing reward hacking and performance collapse. Extensive evaluations across multiple benchmarks demonstrate significant improvements over state-of-the-art methods: more robust and efficient concept erasure while preserving image fidelity and the original model's functional capabilities.

📝 Abstract
Erasing harmful or proprietary concepts from powerful text-to-image generators is an emerging safety requirement, yet current "concept erasure" techniques either collapse image quality, rely on brittle adversarial losses, or demand prohibitive retraining cycles. We trace these limitations to a myopic view of the denoising trajectories that govern diffusion-based generation. We introduce EraseFlow, the first framework that casts concept unlearning as exploration in the space of denoising paths and optimizes it with GFlowNets equipped with the trajectory balance objective. By sampling entire trajectories rather than single end states, EraseFlow learns a stochastic policy that steers generation away from target concepts while preserving the model's prior. By eliminating the need for carefully crafted reward models, EraseFlow generalizes effectively to unseen concepts and avoids hackable rewards while improving performance. Extensive empirical results demonstrate that EraseFlow outperforms existing baselines and achieves an optimal trade-off between performance and prior preservation.
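The trajectory balance objective the abstract refers to can be sketched in a few lines. This is a minimal illustration of the standard GFlowNet trajectory balance loss, not the paper's implementation; the function name and the scalar arguments are assumptions for exposition.

```python
def trajectory_balance_loss(log_Z, log_pf_steps, log_pb_steps, log_reward):
    """Squared trajectory-balance residual for one sampled trajectory.

    log_Z        -- learned log partition function (scalar)
    log_pf_steps -- per-step log-probs of the forward (denoising) policy
    log_pb_steps -- per-step log-probs of the backward policy
    log_reward   -- log R(x) of the terminal sample; in a concept-erasure
                    setting (an assumption here), R would be low when the
                    target concept appears and high otherwise
    """
    residual = log_Z + sum(log_pf_steps) - log_reward - sum(log_pb_steps)
    return residual ** 2

# A perfectly balanced trajectory gives zero loss:
loss = trajectory_balance_loss(
    log_Z=1.0,
    log_pf_steps=[-0.5, -0.5],
    log_pb_steps=[-0.25, -0.25],
    log_reward=0.5,
)
print(loss)  # 0.0
```

Because the loss scores the entire forward trajectory against the reward, minimizing it shapes the whole denoising path rather than just the final image, which is the trajectory-level view the abstract contrasts with myopic end-state objectives.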
Problem

Research questions and friction points this paper is trying to address.

Erasing harmful concepts from text-to-image generators safely
Overcoming limitations of current concept erasure techniques
Learning stochastic policies to preserve model prior while removing targets
Innovation

Methods, ideas, or system contributions that make the work stand out.

GFlowNet-driven alignment optimizes denoising path exploration
Learns stochastic policy to steer generation away from target concepts
Eliminates need for crafted reward models to prevent hacking