Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention

📅 2024-03-17
🏛️ European Conference on Computer Vision
📈 Citations: 27
Influential: 8
📄 PDF
🤖 AI Summary
Text-to-image diffusion models often memorize and reproduce training data, raising serious copyright and privacy concerns. This paper identifies abnormal cross-attention focusing on specific token embeddings as the primary cause of memory leakage. To address this, we propose a plug-and-play, retraining-free framework for memory detection and suppression. Our method integrates cross-attention visualization, token-level attention entropy quantification, dynamic attention masking, and gradient-aware fine-tuning. It preserves generation quality (unchanged FID) and inference speed (latency increase <0.5%), while significantly reducing memory leakage. On models including Stable Diffusion, detection accuracy improves by 32%. Crucially, this work establishes, for the first time, a causal link between attention imbalance and data memorization—providing an efficient, lightweight solution for safe and controllable generative AI.

Technology Category

Application Category

📝 Abstract
Recent advancements in text-to-image diffusion models have demonstrated their remarkable capability to generate high-quality images from textual prompts. However, increasing research indicates that these models memorize and replicate images from their training data, raising tremendous concerns about potential copyright infringement and privacy risks. In our study, we provide a novel perspective to understand this memorization phenomenon by examining its relationship with cross-attention mechanisms. We reveal that during memorization, the cross-attention tends to focus disproportionately on the embeddings of specific tokens. The diffusion model is overfitted to these token embeddings, memorizing corresponding training images. To elucidate this phenomenon, we further identify and discuss various intrinsic findings of cross-attention that contribute to memorization. Building on these insights, we introduce an innovative approach to detect and mitigate memorization in diffusion models. The advantage of our proposed method is that it will not compromise the speed of either the training or the inference processes in these models while preserving the quality of generated images. Our code is available at https://github.com/renjie3/MemAttn .
Problem

Research questions and friction points this paper is trying to address.

Understanding memorization in text-to-image diffusion models via cross-attention mechanisms
Detecting and mitigating memorization without compromising model speed or image quality
Addressing copyright and privacy risks from replicated training data in diffusion models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes memorization via cross-attention mechanisms
Detects overfitting to specific token embeddings
Mitigates memorization without compromising speed
🔎 Similar Papers
No similar papers found.