Revoking Amnesia: RL-based Trajectory Optimization to Resurrect Erased Concepts in Diffusion Models

📅 2025-09-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing concept erasure techniques fail on modern diffusion models (e.g., Flux), as they merely perturb the sampling trajectory rather than genuinely removing the target concept—rendering erasure reversible and producing only an illusory “amnesia.” This work introduces RevAm, the first framework to incorporate Group Relative Policy Optimization (GRPO) into diffusion models. RevAm performs end-to-end, trajectory-level reward optimization of the denoising path without altering model weights, enabling efficient and faithful revival of erased concepts. By escaping local optima inherent in prior methods, RevAm achieves high-fidelity concept recovery across diverse architectures—including SDXL and Flux—outperforming all baselines in both fidelity and robustness. It improves computational efficiency by 10× and systematically exposes a fundamental vulnerability in current safety mechanisms: their reliance on superficial trajectory manipulation rather than true conceptual disentanglement.

📝 Abstract
Concept erasure techniques have been widely deployed in T2I diffusion models to prevent inappropriate content generation for safety and copyright considerations. However, as models evolve to next-generation architectures like Flux, established erasure methods (e.g., ESD, UCE, AC) exhibit degraded effectiveness, raising questions about their true mechanisms. Through systematic analysis, we reveal that concept erasure creates only an illusion of “amnesia”: rather than genuine forgetting, these methods bias sampling trajectories away from target concepts, making the erasure fundamentally reversible. This insight motivates the need to distinguish superficial safety from genuine concept removal. In this work, we propose RevAm (Revoking Amnesia), an RL-based trajectory optimization framework that resurrects erased concepts by dynamically steering the denoising process without modifying model weights. By adapting Group Relative Policy Optimization (GRPO) to diffusion models, RevAm explores diverse recovery trajectories through trajectory-level rewards, overcoming local optima that limit existing methods. Extensive experiments demonstrate that RevAm achieves superior concept resurrection fidelity while reducing computational time by 10×, exposing critical vulnerabilities in current safety mechanisms and underscoring the need for more robust erasure techniques beyond trajectory manipulation.
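The core mechanism described here — group-relative, trajectory-level reward optimization of steering parameters rather than model weights — can be illustrated with a minimal sketch. This is not the paper's implementation: `sample_trajectory`, `reward_fn`, and the Gaussian-perturbation gradient estimator are hypothetical stand-ins for the actual denoising-path steering and concept-fidelity reward, chosen only to show how GRPO-style advantages (group-normalized rewards, no learned critic) drive the update.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantage: normalize each trajectory's scalar reward
    against the mean/std of its sampled group, removing the need for a
    learned value baseline (critic)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def grpo_step(steer_params, sample_trajectory, reward_fn,
              group_size=8, lr=0.1, sigma=0.05, rng=None):
    """One GRPO-style update on steering parameters (model weights are
    never touched). Each group member perturbs the steering parameters,
    rolls out a full denoising trajectory, and receives one
    trajectory-level reward; a score-function estimator over the
    Gaussian perturbations approximates the policy gradient."""
    rng = rng or np.random.default_rng()
    noises, rewards = [], []
    for _ in range(group_size):
        eps_i = rng.normal(0.0, sigma, size=steer_params.shape)
        traj = sample_trajectory(steer_params + eps_i)   # full rollout
        noises.append(eps_i)
        rewards.append(reward_fn(traj))                  # scalar reward
    adv = group_relative_advantages(rewards)
    # Advantage-weighted average of perturbations estimates the gradient.
    grad = np.mean([a * n for a, n in zip(adv, noises)], axis=0) / sigma**2
    return steer_params + lr * grad
```

Because rewards are assigned to whole trajectories and compared only within the sampled group, the update can favor qualitatively different recovery paths instead of greedily following a per-step signal, which is the mechanism the abstract credits for escaping local optima.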
Problem

Research questions and friction points this paper is trying to address.

Investigating illusory concept erasure in diffusion-model safety mechanisms
Developing an RL framework to resurrect erased concepts without weight modification
Exposing vulnerabilities in current safety methods through trajectory optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL-based trajectory optimization for concept resurrection
Dynamic denoising steering without weight modification
Group Relative Policy Optimization adapted to diffusion models