🤖 AI Summary
Existing machine unlearning methods struggle to fully erase unauthorized concepts from diffusion models, often leaving recoverable residual information. This work addresses that limitation by formulating concept recovery as a reinforcement learning task and introducing ReLAPSe, a policy-based adversarial prompt search framework trained with Reinforcement Learning with Verifiable Rewards (RLVR). ReLAPSe leverages the diffusion model's noise prediction loss as an intrinsic reward signal to learn a global policy in the text prompt space for generating adversarial prompts. This approach shifts the paradigm from sample-wise optimization to strategy-driven recovery, substantially improving both efficiency and generalization. Empirical results demonstrate that ReLAPSe enables near-real-time, fine-grained reconstruction of identities and artistic styles across multiple state-of-the-art unlearning methods, significantly enhancing the scalability of red-teaming evaluations against unlearned diffusion models.
📝 Abstract
Machine unlearning is a key defense mechanism for removing unauthorized concepts from text-to-image diffusion models, yet recent evidence shows that latent visual information often persists after unlearning. Existing adversarial approaches for exploiting this leakage are constrained by fundamental limitations: optimization-based methods are computationally expensive due to per-instance iterative search, while reasoning-based and heuristic techniques lack direct feedback from the target model's latent visual representations. To address these challenges, we introduce ReLAPSe, a policy-based adversarial framework that reformulates concept restoration as a reinforcement learning problem. ReLAPSe trains an agent using Reinforcement Learning with Verifiable Rewards (RLVR), leveraging the diffusion model's noise prediction loss as a model-intrinsic and verifiable feedback signal. This closed-loop design directly aligns textual prompt manipulation with latent visual residuals, enabling the agent to learn transferable restoration strategies rather than optimizing isolated prompts. By pioneering the shift from per-instance optimization to global policy learning, ReLAPSe achieves efficient, near-real-time recovery of fine-grained identities and styles across multiple state-of-the-art unlearning methods, providing a scalable tool for rigorous red-teaming of unlearned diffusion models. Some experimental evaluations involve sensitive visual concepts, such as nudity. Code is available at https://github.com/gmum/ReLaPSe.
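To make the core idea concrete, here is a minimal toy sketch of the RLVR loop the abstract describes: a prompt policy is trained with policy gradients, using a (negated) noise-prediction loss as a verifiable, model-intrinsic reward. Everything here is a hypothetical stand-in for illustration — the "diffusion model" is a random linear map, prompts are single tokens from a tiny vocabulary, and the optimizer is plain REINFORCE with a running-mean baseline; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in ReLAPSe the reward comes from a real diffusion
# model's noise-prediction loss; here each "prompt token" maps to a fixed
# predicted-noise vector, and one latent target noise plays the erased concept.
VOCAB = 8                                    # toy prompt-token vocabulary
DIM = 4                                      # toy latent dimension
token_effects = rng.normal(size=(VOCAB, DIM))
target_eps = rng.normal(size=DIM)            # noise the target concept induces

def noise_pred_loss(token: int) -> float:
    """MSE between the toy model's predicted noise and the target noise."""
    pred = token_effects[token]
    return float(np.mean((pred - target_eps) ** 2))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Categorical policy over single-token "prompts", trained with REINFORCE.
logits = np.zeros(VOCAB)
baseline = 0.0
lr = 0.3
for step in range(2000):
    probs = softmax(logits)
    token = rng.choice(VOCAB, p=probs)
    reward = -noise_pred_loss(token)         # verifiable, model-intrinsic reward
    advantage = reward - baseline            # variance reduction via baseline
    baseline += 0.1 * (reward - baseline)
    grad = -probs
    grad[token] += 1.0                       # d log pi(token) / d logits
    logits += lr * advantage * grad          # policy-gradient ascent step

best = int(np.argmax(softmax(logits)))
oracle = int(np.argmin([noise_pred_loss(t) for t in range(VOCAB)]))
```

Because the reward is computed directly from the model's own loss rather than from an external judge, the loop is "closed": each sampled prompt is scored against the latent residuals, and the policy — not any individual prompt — accumulates the restoration strategy.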