Demystifying Foreground-Background Memorization in Diffusion Models

📅 2025-08-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models (DMs) suffer from localized memorization: they not only reproduce training images verbatim but also replicate fine-grained local patches, especially foreground regions, across multiple training samples sharing the same prompt. Existing detection and mitigation methods cannot characterize or suppress such cross-sample, region-level memorization. To address this, we propose FB-Mem, the first foreground-background memorization quantification framework grounded in image segmentation. It integrates feature similarity analysis with clustering to measure memorization strength separately in foreground and background regions. Experiments reveal that localized memorization is significantly more pervasive than previously recognized; model-level deactivation techniques prove largely ineffective against foreground memorization; and our clustering-driven data augmentation strategy substantially reduces localized memorization risk. FB-Mem establishes a novel paradigm for memorization governance in diffusion models.
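The core measurement step described above (per-region feature similarity between a generated image and training images) can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, the use of precomputed feature maps, and the max-over-training-set aggregation are assumptions.

```python
import numpy as np

def region_similarity(gen_feat, train_feat, mask):
    """Cosine similarity between two (H, W, D) feature maps,
    restricted to the pixels selected by a boolean mask."""
    m = mask.astype(bool)
    a = gen_feat[m].ravel()
    b = train_feat[m].ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def fb_mem_scores(gen_feat, train_feats, fg_mask):
    """Foreground/background memorization scores for one generation:
    the maximum region similarity over the whole training set."""
    bg_mask = ~fg_mask.astype(bool)
    fg = max(region_similarity(gen_feat, t, fg_mask) for t in train_feats)
    bg = max(region_similarity(gen_feat, t, bg_mask) for t in train_feats)
    return fg, bg
```

A high foreground score with a low background score would flag exactly the localized, foreground-dominated memorization the paper reports.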

📝 Abstract
Diffusion models (DMs) memorize training images and can reproduce near-duplicates during generation. Current detection methods identify verbatim memorization but fail to capture two critical aspects: quantifying partial memorization occurring in small image regions, and memorization patterns beyond specific prompt-image pairs. To address these limitations, we propose Foreground Background Memorization (FB-Mem), a novel segmentation-based metric that classifies and quantifies memorized regions within generated images. Our method reveals that memorization is more pervasive than previously understood: (1) individual generations from single prompts may be linked to clusters of similar training images, revealing complex memorization patterns that extend beyond one-to-one correspondences; and (2) existing model-level mitigation methods, such as neuron deactivation and pruning, fail to eliminate local memorization, which persists particularly in foreground regions. Our work establishes an effective framework for measuring memorization in diffusion models, demonstrates the inadequacy of current mitigation approaches, and proposes a stronger mitigation method using a clustering approach.
Problem

Research questions and friction points this paper is trying to address.

Quantifying partial memorization in small image regions of diffusion models
Detecting memorization patterns beyond specific prompt-image pairs
Evaluating inadequacy of current mitigation methods for local memorization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Segmentation-based metric for memorization quantification
Clustering approach for stronger mitigation method
Reveals complex memorization patterns beyond one-to-one prompt-image correspondences
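The clustering idea behind the mitigation could look like the sketch below: group training images whose embeddings are near-duplicates, so that augmentation can then target whole clusters rather than individual prompt-image pairs. This is a hypothetical greedy variant; the paper's actual clustering algorithm and threshold are not specified here.

```python
import numpy as np

def cluster_near_duplicates(feats, threshold=0.95):
    """Greedy clustering of image embeddings (N, D): assign each image
    to the first existing cluster whose seed vector has cosine
    similarity >= threshold, else start a new cluster."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    seeds, labels = [], []
    for f in feats:
        sims = [s @ f for s in seeds]
        if sims and max(sims) >= threshold:
            labels.append(int(np.argmax(sims)))
        else:
            seeds.append(f)
            labels.append(len(seeds) - 1)
    return labels
```

Clusters with many members point to groups of similar training images that a single prompt can memorize collectively, which is where cluster-level augmentation would be applied.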