Redistribute Ensemble Training for Mitigating Memorization in Diffusion Models

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Diffusion models pose a non-negligible risk of memorizing visual training data, yet existing privacy-preserving methods are largely confined to cross-modal (e.g., text-to-image) settings and do not generalize to purely visual modalities. To address this, the paper proposes the first privacy-enhancing framework designed specifically for visual-domain diffusion models. The approach shards the training data and trains a proxy model per shard, so that the aggregated final model is never exposed to raw images directly; it introduces a new paradigm for learning through proxy model parameters; and it incorporates an IET-AGC+ mechanism that dynamically identifies high-memorization samples and redistributes them across shards. Loss-driven dynamic data augmentation and a loss-threshold skipping strategy further mitigate memorization. Evaluated on four benchmark datasets, the method substantially suppresses memorization, reducing the memorization score by 46.7% after fine-tuning Stable Diffusion, while preserving generation fidelity and model utility, thus achieving a principled trade-off between privacy protection and performance.

📝 Abstract
Diffusion models, known for their tremendous ability to generate high-quality samples, have recently raised concerns due to their data memorization behavior, which poses privacy risks. Recent methods for memorization mitigation have primarily addressed the issue within the context of the text modality in cross-modal generation tasks, restricting their applicability to specific conditions. In this paper, we propose a novel method for diffusion models from the perspective of the visual modality, which is more generic and fundamental for mitigating memorization. Directly exposing visual data to the model increases memorization risk, so we design a framework where models learn through proxy model parameters instead. Specifically, the training dataset is divided into multiple shards, with each shard training a proxy model; these are then aggregated to form the final model. Additionally, practical analysis of training losses shows that the losses for easily memorable images tend to be markedly lower. Thus, we skip the samples with abnormally low loss values in the current mini-batch to avoid memorization. However, balancing the need to skip memorization-prone samples while maintaining sufficient training data for high-quality image generation presents a key challenge. Thus, we propose IET-AGC+, which redistributes highly memorizable samples between shards to prevent these samples from being over-skipped. Furthermore, we dynamically augment samples based on their loss values to further reduce memorization. Extensive experiments and analysis on four datasets show that our method successfully reduces memorization while maintaining performance. Moreover, we fine-tune pre-trained diffusion models, e.g., Stable Diffusion, and decrease the memorization score by 46.7%, demonstrating the effectiveness of our method. Code is available at: https://github.com/liuxiao-guan/IET_AGC.
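The threshold-skipping idea in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name `skip_low_loss_samples` and the threshold `tau` are hypothetical stand-ins, and per-sample losses are plain floats rather than tensors.

```python
def skip_low_loss_samples(per_sample_losses, tau):
    """Drop samples whose loss is abnormally low before averaging.

    Abnormally low loss is the paper's signal that a sample is being
    memorized, so such samples are skipped in the current mini-batch.
    `tau` is a hypothetical threshold hyperparameter.
    """
    kept = [i for i, loss in enumerate(per_sample_losses) if loss >= tau]
    if not kept:
        # Every sample looks memorized: fall back to the whole batch
        # rather than producing an empty update.
        kept = list(range(len(per_sample_losses)))
    mean_loss = sum(per_sample_losses[i] for i in kept) / len(kept)
    return mean_loss, kept

# Toy mini-batch of four per-sample losses; the first is suspiciously low.
losses = [0.02, 0.31, 0.28, 0.45]
mean_loss, kept = skip_low_loss_samples(losses, tau=0.05)
# kept -> [1, 2, 3]
```

In the paper's full IET-AGC+ scheme, skipped samples are not simply discarded; they are redistributed to other shards and augmented more aggressively so the model retains enough training signal.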
Problem

Research questions and friction points this paper is trying to address.

Mitigating memorization in diffusion models
A redistributed ensemble training framework
Maintaining generation quality while reducing memorization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proxy model parameters for learning
Redistribution of memorizable samples
Dynamic augmentation based on loss values
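The shard-and-aggregate step behind the proxy-model contribution can be illustrated with a small sketch. The abstract only states that per-shard proxy models are "aggregated to form the final model"; simple parameter averaging is an assumption here, and `aggregate_proxy_models` is a hypothetical name. Parameters are plain lists of floats standing in for tensors.

```python
def aggregate_proxy_models(proxy_params):
    """Average parameters across per-shard proxy models.

    Each entry maps a parameter name to a list of floats.
    Averaging is an assumed aggregation rule for illustration only.
    """
    n = len(proxy_params)
    return {
        name: [
            sum(p[name][i] for p in proxy_params) / n
            for i in range(len(proxy_params[0][name]))
        ]
        for name in proxy_params[0]
    }

# Two shards, each having trained its own proxy model on disjoint data.
shard_a = {"w": [1.0, 2.0]}
shard_b = {"w": [3.0, 4.0]}
final = aggregate_proxy_models([shard_a, shard_b])
# final -> {"w": [2.0, 3.0]}
```

Because each proxy model only ever sees its own shard, no single model is exposed to the full training set, which is the privacy mechanism the framework relies on.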
Xiaoliu Guan
School of Computer Science, Wuhan University, China
Yu Wu
University of Cambridge
machine learning, health sensing, mobile health
Huayang Huang
School of Computer Science, Wuhan University, China
Xiao Liu
School of Computer Science, Wuhan University, China
Jiaxu Miao
Sun Yat-Sen University
Deep Learning, Video Segmentation, Federated Learning
Yi Yang
College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China