ReLAPSe: Reinforcement-Learning-trained Adversarial Prompt Search for Erased concepts in unlearned diffusion models

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing machine unlearning methods struggle to fully erase unauthorized concepts from diffusion models, often leaving recoverable residual information. This work addresses that limitation by formulating concept recovery as a reinforcement learning task and introducing ReLAPSe, a prompt search framework trained with Reinforcement Learning with Verifiable Rewards (RLVR). ReLAPSe leverages the diffusion model's noise prediction loss as an intrinsic reward signal to learn a global policy over the text prompt space for generating adversarial prompts. This approach shifts the paradigm from sample-wise optimization to strategy-driven recovery, substantially improving both efficiency and generalization. Empirical results demonstrate that ReLAPSe enables near-real-time, fine-grained reconstruction of identities and artistic styles across multiple state-of-the-art unlearning methods, significantly improving the scalability of red-teaming evaluations against unseen diffusion models.

📝 Abstract
Machine unlearning is a key defense mechanism for removing unauthorized concepts from text-to-image diffusion models, yet recent evidence shows that latent visual information often persists after unlearning. Existing adversarial approaches for exploiting this leakage face fundamental limitations: optimization-based methods are computationally expensive due to per-instance iterative search, while reasoning-based and heuristic techniques lack direct feedback from the target model's latent visual representations. To address these challenges, we introduce ReLAPSe, a policy-based adversarial framework that reformulates concept restoration as a reinforcement learning problem. ReLAPSe trains an agent using Reinforcement Learning with Verifiable Rewards (RLVR), leveraging the diffusion model's noise prediction loss as a model-intrinsic and verifiable feedback signal. This closed-loop design directly aligns textual prompt manipulation with latent visual residuals, enabling the agent to learn transferable restoration strategies rather than optimizing isolated prompts. By pioneering the shift from per-instance optimization to global policy learning, ReLAPSe achieves efficient, near-real-time recovery of fine-grained identities and styles across multiple state-of-the-art unlearning methods, providing a scalable tool for rigorous red-teaming of unlearned diffusion models. Some experimental evaluations involve sensitive visual concepts, such as nudity. Code is available at https://github.com/gmum/ReLaPSe.
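The core idea, a policy trained by RLVR where the diffusion model's noise-prediction loss acts as the verifiable reward for adversarial prompt search, can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the paper's implementation: `surrogate_noise_loss` stands in for the unlearned model's noise-prediction MSE, the "prompt" is a short token sequence over a tiny vocabulary, and the policy is a per-position categorical distribution updated with plain REINFORCE.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LENGTH = 5, 3
# Hidden token sequence that best restores the "erased" concept (toy stand-in).
TARGET = np.array([3, 1, 4])

def surrogate_noise_loss(prompt):
    # Stand-in for the diffusion model's noise-prediction loss:
    # lower when the prompt better reconstructs the erased concept.
    return float(np.sum(prompt != TARGET))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Global policy: independent categorical distribution per prompt position.
logits = np.zeros((LENGTH, VOCAB))
baseline, lr = 0.0, 0.3

for step in range(2000):
    probs = softmax(logits)
    # Sample an adversarial prompt from the current policy.
    prompt = np.array([rng.choice(VOCAB, p=probs[i]) for i in range(LENGTH)])
    # Verifiable reward: negative surrogate noise-prediction loss.
    reward = -surrogate_noise_loss(prompt)
    # Moving-average baseline for variance reduction.
    baseline = 0.9 * baseline + 0.1 * reward
    adv = reward - baseline
    # REINFORCE: grad of log p(token) under softmax is onehot - probs.
    for i in range(LENGTH):
        grad = -probs[i]
        grad[prompt[i]] += 1.0
        logits[i] += lr * adv * grad

best_prompt = logits.argmax(axis=1)
print("recovered prompt:", best_prompt.tolist(),
      "loss:", surrogate_noise_loss(best_prompt))
```

The policy is learned once and then generates candidate prompts by cheap forward sampling, which mirrors the paper's claimed shift from per-instance iterative optimization to near-real-time, strategy-driven recovery.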
Problem

Research questions and friction points this paper is trying to address.

machine unlearning
diffusion models
adversarial prompt search
concept leakage
text-to-image generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Machine Unlearning
Adversarial Prompt Search
Diffusion Models
Concept Restoration