Enhancing Privacy-Utility Trade-offs to Mitigate Memorization in Diffusion Models

📅 2025-04-25
📈 Citations: 1
Influential: 0
🤖 AI Summary
Text-to-image diffusion models often memorize training data, posing privacy leakage and copyright infringement risks; existing privacy-preserving methods typically degrade generative utility, particularly text alignment. To address this trade-off, we propose PRSS, a synergistic optimization framework comprising Prompt Re-anchoring (PR) to attenuate memorization of training images and Semantic Search (SS) to preserve strong text alignment. We further introduce a gradient-guided re-anchoring strategy, cross-embedding-space semantic retrieval, and privacy-controllable noise scheduling. Under multiple privacy budgets, PRSS consistently outperforms state-of-the-art methods: text alignment scores improve by up to 12.7%, while training image reconstruction rates drop below 0.3%. Notably, PRSS is the first approach to simultaneously achieve high privacy protection and enhanced generation quality.

📝 Abstract
Text-to-image diffusion models have demonstrated remarkable capabilities in creating images highly aligned with user prompts, yet their proclivity for memorizing training set images has sparked concerns about the originality of the generated images and privacy issues, potentially leading to legal complications for both model owners and users, particularly when the memorized images contain proprietary content. Although methods to mitigate these issues have been suggested, enhancing privacy often results in a significant decrease in the utility of the outputs, as indicated by text-alignment scores. To bridge the research gap, we introduce a novel method, PRSS, which refines the classifier-free guidance approach in diffusion models by integrating prompt re-anchoring (PR) to improve privacy and incorporating semantic prompt search (SS) to enhance utility. Extensive experiments across various privacy levels demonstrate that our approach consistently improves the privacy-utility trade-off, establishing a new state-of-the-art.
Problem

Research questions and friction points this paper is trying to address.

Mitigating memorization in diffusion models to protect privacy
Balancing privacy and utility in generated images
Improving text-alignment without compromising training data privacy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces PRSS method for diffusion models
Uses prompt re-anchoring to enhance privacy
Incorporates semantic search to improve utility
Chen Chen
School of Computer Science, Faculty of Engineering, The University of Sydney, Australia
Daochang Liu
Lecturer, University of Western Australia
Computer Vision · Generative AI · Human Action Understanding · Healthcare Data Science
Mubarak Shah
Trustee Chair Professor of Computer Science, University of Central Florida
Computer Vision
Chang Xu
School of Computer Science, Faculty of Engineering, The University of Sydney, Australia